[PATCH v2 2/4] tcp: Re-introduce inactivity timeouts based on a clock algorithm
We previously had a mechanism to remove TCP connections which were
inactive for 2 hours. That was broken for a long time, due to poor
interactions with the timerfd handling, so we removed it.
Adding this long-scale timer onto the timerfd handling, which mostly
handles much shorter timeouts, is tricky to reason about. However, for the
inactivity timeouts we don't require precision. Instead, we can use
a 1-bit page replacement / "clock" algorithm. Every INACTIVITY_INTERVAL
(2 hours), a global timer marks every TCP connection as tentatively
inactive. That flag is cleared if we get any events, either tap side or
socket side.
If the inactive flag is still set when the next INACTIVITY_INTERVAL expires
then the connection has been inactive for an extended period and we reset
and close it. In practice this means that connections will be removed
after 2-4 hours of inactivity.
This is not a true fix for bug 179, but it does mitigate the damage by
limiting the time that inactive connections will remain around.
Link: https://bugs.passt.top/show_bug.cgi?id=179
Signed-off-by: David Gibson
On Fri, 6 Feb 2026 17:17:37 +1100
David Gibson wrote:
---
 tcp.c      | 53 +++++++++++++++++++++++++++++++++++++++++++++++++----
 tcp.h      |  4 +++-
 tcp_conn.h |  3 +++
 3 files changed, 55 insertions(+), 5 deletions(-)

diff --git a/tcp.c b/tcp.c
index f8663369..09929ee9 100644
--- a/tcp.c
+++ b/tcp.c
@@ -198,6 +198,13 @@
  * TCP_INFO, with a representable range from RTT_STORE_MIN (100 us) to
  * RTT_STORE_MAX (3276.8 ms). The timeout value is clamped accordingly.
  *
+ * We also use a global interval timer for an activity timeout which doesn't
+ * require precision:
+ *
+ * - INACTIVITY_INTERVAL: if a connection has had no activity for an entire
+ *   interval, close and reset it. This means that idle connections (without
+ *   keepalives) will be removed between INACTIVITY_INTERVAL seconds and
+ *   2*INACTIVITY_INTERVAL seconds after the last activity.
  *
  * Summary of data flows (with ESTABLISHED event)
  * ----------------------------------------------
@@ -333,7 +340,8 @@ enum {
 #define RTO_INIT			1	/* s, RFC 6298 */
 #define RTO_INIT_AFTER_SYN_RETRIES	3	/* s, RFC 6298 */
-#define ACT_TIMEOUT			7200
+
+#define INACTIVITY_INTERVAL		7200	/* s */
 #define LOW_RTT_TABLE_SIZE		8
 #define LOW_RTT_THRESHOLD		10	/* us */
@@ -2254,6 +2262,8 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
 		return 1;
 	}
 
+	conn->inactive = false;
+
 	if (th->ack && !(conn->events & ESTABLISHED))
 		tcp_update_seqack_from_tap(c, conn, ntohl(th->ack_seq));
@@ -2622,6 +2632,8 @@ void tcp_sock_handler(const struct ctx *c, union epoll_ref ref,
 		return;
 	}
 
+	conn->inactive = false;
+
 	if ((conn->events & TAP_FIN_ACKED) && (events & EPOLLHUP)) {
 		conn_event(c, conn, CLOSED);
 		return;
@@ -2872,18 +2884,51 @@ int tcp_init(struct ctx *c)
 	return 0;
 }
 
+/**
+ * tcp_inactivity() - Scan for and close long-inactive connections
+ * @c:		Execution context
+ * @now:	Current timestamp
+ */
+static void tcp_inactivity(struct ctx *c, const struct timespec *now)
+{
+	union flow *flow;
+
+	if (now->tv_sec - c->tcp.inactivity_run < INACTIVITY_INTERVAL)
+		return;
+
+	debug("TCP inactivity scan");
+	c->tcp.inactivity_run = now->tv_sec;
+
+	flow_foreach(flow) {
Nit: this could be flow_foreach_of_type((flow), FLOW_TCP), or, given that
it's the second usage of that, we could finally introduce a
foreach_tcp_flow() macro, and rebuild foreach_established_tcp_flow() on top
of that.

Using foreach_established_tcp_flow() should be equivalent here by the way,
because in all non-established cases we should have shorter timeouts, but
it looks unnecessarily fragile.

Same for tcp_keepalive() from 4/4.
+		struct tcp_tap_conn *conn = &flow->tcp;
+
+		if (flow->f.type != FLOW_TCP)
+			continue;
+
+		if (conn->inactive) {
+			/* No activity in this interval, reset */
+			flow_dbg(conn, "Inactive for at least %us, resetting",
+				 INACTIVITY_INTERVAL);
+			tcp_rst(c, conn);
+		}
+
+		/* Ready to check for next interval */
+		conn->inactive = true;
+	}
+}
+
 /**
  * tcp_timer() - Periodic tasks: port detection, closed connections, pool refill
  * @c:		Execution context
  * @now:	Current timestamp
  */
-void tcp_timer(const struct ctx *c, const struct timespec *now)
+void tcp_timer(struct ctx *c, const struct timespec *now)
 {
-	(void)now;
-
 	tcp_sock_refill_init(c);
 	if (c->mode == MODE_PASTA)
 		tcp_splice_refill(c);
+
+	tcp_inactivity(c, now);
 }
 
 /**
diff --git a/tcp.h b/tcp.h
index 24b90870..e104d453 100644
--- a/tcp.h
+++ b/tcp.h
@@ -21,7 +21,7 @@ int tcp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af,
 int tcp_listen(const struct ctx *c, uint8_t pif, unsigned rule,
 	       const union inany_addr *addr, const char *ifname, in_port_t port);
 int tcp_init(struct ctx *c);
-void tcp_timer(const struct ctx *c, const struct timespec *now);
+void tcp_timer(struct ctx *c, const struct timespec *now);
 void tcp_defer_handler(struct ctx *c);
 
 void tcp_update_l2_buf(const unsigned char *eth_d);
@@ -38,6 +38,7 @@ extern bool peek_offset_cap;
  * @rto_max:		Maximum retry timeout (in s)
  * @syn_retries:	SYN retries using exponential backoff timeout
  * @syn_linear_timeouts: SYN retries before using exponential backoff timeout
+ * @inactivity_run:	Time we last scanned for inactive connections
  */
 struct tcp_ctx {
 	struct fwd_ports fwd_in;
@@ -47,6 +48,7 @@ struct tcp_ctx {
 	int rto_max;
 	uint8_t syn_retries;
 	uint8_t syn_linear_timeouts;
+	time_t inactivity_run;
 };
 
 #endif /* TCP_H */
diff --git a/tcp_conn.h b/tcp_conn.h
index 21cea109..7197ff63 100644
--- a/tcp_conn.h
+++ b/tcp_conn.h
@@ -16,6 +16,7 @@
  * @ws_from_tap:	Window scaling factor advertised from tap/guest
  * @ws_to_tap:		Window scaling factor advertised to tap/guest
  * @tap_mss:		MSS advertised by tap/guest, rounded to 2 ^ TCP_MSS_BITS
+ * @inactive:		No activity within the current INACTIVITY_INTERVAL
  * @sock:		Socket descriptor number
  * @events:		Connection events, implying connection states
  * @timer:		timerfd descriptor for timeout events
@@ -57,6 +58,8 @@ struct tcp_tap_conn {
 	(conn->rtt_exp = MIN(RTT_EXP_MAX, ilog2(MAX(1, rtt / RTT_STORE_MIN))))
 #define RTT_GET(conn)	(RTT_STORE_MIN << conn->rtt_exp)
 
+	bool	inactive	:1;
+
 	int	sock		:FD_REF_BITS;
 
 	uint8_t	events;

--
Stefano
On Wed, Feb 25, 2026 at 07:15:41AM +0100, Stefano Brivio wrote:
On Fri, 6 Feb 2026 17:17:37 +1100
David Gibson wrote:
Nit: this could be flow_foreach_of_type((flow), FLOW_TCP), or, given that
it's the second usage of that, we could finally introduce a
foreach_tcp_flow() macro, and rebuild foreach_established_tcp_flow() on top
of that.

Oops, I forgot I created that.
I looked into making a foreach_tcp_flow() macro that used a struct tcp_conn * instead of a union flow *. I think it's possible but it was pretty fiddly, so I gave up. Given that, I'm more comfortable keeping flow_foreach_of_type(). Patch using it for these cases posted.
Using foreach_established_tcp_flow() should be equivalent here by the way, because in all non-established cases we should have shorter timeouts, but it looks unnecessarily fragile.
Agreed.
Same for tcp_keepalive() from 4/4.
Done.

--
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.
http://www.ozlabs.org/~dgibson