On Thu, 7 Dec 2023 15:11:42 +1100
David Gibson <david(a)gibson.dropbear.id.au> wrote:

> On Wed, Dec 06, 2023 at 11:43:29PM +0100, Stefano Brivio wrote:

Hah, this ^^^

> > On Mon, 4 Dec 2023 14:16:09 +1100
> > David Gibson <david(a)gibson.dropbear.id.au> wrote:
> > > Currently we deal with hash collisions by letting a hash bucket contain
> > > multiple entries, forming a linked list using an index in the connection
> > > structure.
> > >
> > > That's a pretty standard and simple approach, but in our case we can use
> > > an even simpler one: linear probing.  Here, if a hash bucket is occupied,
> > > we just move on to the next one until we find a free one.  This slightly
> > > simplifies lookup and, more importantly, saves some precious bytes in the
> > > connection structure by removing the need for a link.  It does require
> > > some additional complexity for hash removal.
> > >
> > > This approach can perform poorly when the hash table load is high.
> > > However, we already size our hash table of pointers larger than the
> > > connection table, which puts an upper bound on the load.  It's relatively
> > > cheap to decrease that bound if we find we need to.
> > >
> > > I adapted the linear probing operations from Knuth's The Art of Computer
> > > Programming, Volume 3, 2nd Edition.  Specifically, Algorithm L and
> > > Algorithm R in Section 6.4.  Note that there is an error in Algorithm R
> > > as printed, see errata at [0].
> > >
> > > [0] https://www-cs-faculty.stanford.edu/~knuth/all3-prepre.ps.gz
> > >
> > > Signed-off-by: David Gibson <david(a)gibson.dropbear.id.au>
> > > ---
> > >  tcp.c      | 111 +++++++++++++++++++++++++++--------------------------
> > >  tcp_conn.h |   2 -
> > >  util.h     |  13 +++++++
> > >  3 files changed, 69 insertions(+), 57 deletions(-)
> > >
> > > diff --git a/tcp.c b/tcp.c
> > > index 17c7cba..09acf7f 100644
> > > --- a/tcp.c
> > > +++ b/tcp.c
> > > @@ -573,22 +573,12 @@ static unsigned int tcp6_l2_flags_buf_used;
> > >  
> > >  #define CONN(idx)		(&(FLOW(idx)->tcp))
> > >  
> > > -/** conn_at_idx() - Find a connection by index, if present
> > > - * @idx:	Index of connection to lookup
> > > - *
> > > - * Return: pointer to connection, or NULL if @idx is out of bounds
> > > - */
> > > -static inline struct tcp_tap_conn *conn_at_idx(unsigned idx)
> > > -{
> > > -	if (idx >= FLOW_MAX)
> > > -		return NULL;
> > > -	ASSERT(CONN(idx)->f.type == FLOW_TCP);
> > > -	return CONN(idx);
> > > -}
> > > -
> > >  /* Table for lookup from remote address, local port, remote port */
> > >  static struct tcp_tap_conn *tc_hash[TCP_HASH_TABLE_SIZE];
> > >  
> > > +static_assert(ARRAY_SIZE(tc_hash) >= FLOW_MAX,
> > > +	      "Safe linear probing requires hash table larger than connection table");
> > > +
> > >  /* Pools for pre-opened sockets (in init) */
> > >  int init_sock_pool4		[TCP_SOCK_POOL_SIZE];
> > >  int init_sock_pool6		[TCP_SOCK_POOL_SIZE];
> > > @@ -1196,6 +1186,27 @@ static unsigned int tcp_conn_hash(const struct ctx *c,
> > >  	return tcp_hash(c, &conn->faddr, conn->eport, conn->fport);
> > >  }
> > >  
> > > +/**
> > > + * tcp_hash_probe() - Find hash bucket for a connection
> > > + * @c:		Execution context
> > > + * @conn:	Connection to find bucket for
> > > + *
> > > + * Return: If @conn is in the table, its current bucket, otherwise a suitable
> > > + *         free bucket for it.
> > > + */
> > > +static inline unsigned tcp_hash_probe(const struct ctx *c,
> > > +				      const struct tcp_tap_conn *conn)
> > > +{
> > > +	unsigned b;
> > > +
> > > +	/* Linear probing */
> > > +	for (b = tcp_conn_hash(c, conn); tc_hash[b] && tc_hash[b] != conn;
> > > +	     b = (b + 1) % TCP_HASH_TABLE_SIZE)
> > > +		;
> > > +
> > > +	return b;
> > > +}
> > > +
> > >  /**
> > >   * tcp_hash_insert() - Insert connection into hash table, chain link
> > >   * @c:		Execution context
> > > @@ -1203,14 +1214,10 @@ static unsigned int tcp_conn_hash(const struct ctx *c,
> > >   */
> > >  static void tcp_hash_insert(const struct ctx *c, struct tcp_tap_conn *conn)
> > >  {
> > > -	int b;
> > > +	unsigned b = tcp_hash_probe(c, conn);
> > >  
> > > -	b = tcp_hash(c, &conn->faddr, conn->eport, conn->fport);
> > > -	conn->next_index = tc_hash[b] ? FLOW_IDX(tc_hash[b]) : -1U;
> > >  	tc_hash[b] = conn;
> > > -
> > > -	flow_dbg(conn, "hash table insert: sock %i, bucket: %i, next: %p",
> > > -		 conn->sock, b, (void *)conn_at_idx(conn->next_index));
> > > +	flow_dbg(conn, "hash table insert: sock %i, bucket: %u", conn->sock, b);
> > >  }
> > >  
> > >  /**
> > > @@ -1221,23 +1228,27 @@ static void tcp_hash_insert(const struct ctx *c, struct tcp_tap_conn *conn)
> > >  static void tcp_hash_remove(const struct ctx *c,
> > >  			    const struct tcp_tap_conn *conn)
> > >  {
> > > -	struct tcp_tap_conn *entry, *prev = NULL;
> > > -	int b = tcp_conn_hash(c, conn);
> > > +	unsigned b = tcp_hash_probe(c, conn), s;
> > >  
> > > -	for (entry = tc_hash[b]; entry;
> > > -	     prev = entry, entry = conn_at_idx(entry->next_index)) {
> > > -		if (entry == conn) {
> > > -			if (prev)
> > > -				prev->next_index = conn->next_index;
> > > -			else
> > > -				tc_hash[b] = conn_at_idx(conn->next_index);
> > > -			break;
> > > +	if (!tc_hash[b])
> > > +		return; /* Redundant remove */
> > > +
> > > +	flow_dbg(conn, "hash table remove: sock %i, bucket: %u", conn->sock, b);
> > > +
> > > +	/* Scan the remainder of the cluster */
> > > +	for (s = (b + 1) % TCP_HASH_TABLE_SIZE; tc_hash[s];
> > > +	     s = (s + 1) % TCP_HASH_TABLE_SIZE) {
> > > +		unsigned h = tcp_conn_hash(c, tc_hash[s]);
> > > +
> > > +		if (in_mod_range(h, b, s, TCP_HASH_TABLE_SIZE)) {
> > > +			/* tc_hash[s] can live in tc_hash[b]'s slot */
> > > +			debug("hash table remove: shuffle %u -> %u", s, b);
> > > +			tc_hash[b] = tc_hash[s];
> > > +			b = s;
> > >  		}
> > >  	}
> > 
> > This makes intuitive sense to me, but I can't wrap my head around the
> > fact that it corresponds to Algorithm R. Step R3 implies that, if h
> > *is* (cyclically) between b and s, you should skip the move and go
> > back to R2 right away.
> > 
> > The condition here seems to be reversed, though. What am I missing?
> 
> Ugh... this is doing my head in a bit, because there are a bunch of
> stacked negatives.
> 
> Ok, so the original is:
> 	"If r lies cyclically between i and j, go back to R2"
> Or equivalently a loop body of
> 	if (in_mod_range(r, i, j))
> 		continue;
> 	/* Step R4/R1 stuff */
> 
> Now in this version we have r => h, i => s and j => b, so
> 	if (in_mod_range(h, s, b))
> 		continue;
> 	/* Step R4/R1 stuff */
> Or equivalently
> 	if (!in_mod_range(h, s, b))
> 		/* Step R4/R1 stuff */;
> And because of how "cyclically between" works, that becomes:
> 
> 	if (in_mod_range(h, b, s))
> 		/* Step R4/R1 stuff */;

is what I was missing.

-- 
Stefano
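
As a point of reference for the discussion above, here is a rough,
self-contained sketch of the same probe/insert/remove scheme outside of
passt. It is not the patch's code: every name in it (TBL_SIZE, struct
entry, hash_of(), in_cyclic_range(), probe(), insert(), remove_entry())
is invented for illustration. In particular, in_cyclic_range() pins down
one explicit convention, "x lies in the half-open cyclic range [i, j)",
which is only an assumption for this sketch and need not match the
argument order or endpoint convention of the in_mod_range() helper the
patch adds to util.h (not quoted here). The hash is a placeholder too; a
real implementation would use a proper keyed hash over the address/port
tuple.

/* Toy linear probing table: not passt code, names invented for this
 * sketch.  TBL_SIZE stands in for TCP_HASH_TABLE_SIZE and must stay
 * larger than the number of live entries, as the static_assert in the
 * patch guarantees for the real table.
 */
#include <stdbool.h>
#include <stddef.h>

#define TBL_SIZE 64

struct entry {
	unsigned key;		/* stand-in for the address/port tuple */
};

static struct entry *tbl[TBL_SIZE];

static unsigned hash_of(const struct entry *e)
{
	return e->key % TBL_SIZE;	/* placeholder, not a real hash */
}

/* true iff @x lies in the cyclic, half-open range [@i, @j); arguments
 * are table indices, so the additions below cannot overflow
 */
static bool in_cyclic_range(unsigned x, unsigned i, unsigned j)
{
	return (x + TBL_SIZE - i) % TBL_SIZE < (j + TBL_SIZE - i) % TBL_SIZE;
}

/* Algorithm L flavour: @e's bucket if present, else a free bucket */
static unsigned probe(const struct entry *e)
{
	unsigned b;

	for (b = hash_of(e); tbl[b] && tbl[b] != e; b = (b + 1) % TBL_SIZE)
		;
	return b;
}

static void insert(struct entry *e)
{
	tbl[probe(e)] = e;
}

/* Algorithm R flavour, adapted to probing upwards: after emptying
 * bucket @b, walk the rest of the cluster and pull back any entry whose
 * lookup path passed through @b, so later lookups still terminate.
 */
static void remove_entry(const struct entry *e)
{
	unsigned b = probe(e), s;

	if (!tbl[b])
		return;		/* not in the table */

	for (s = (b + 1) % TBL_SIZE; tbl[s]; s = (s + 1) % TBL_SIZE) {
		unsigned h = hash_of(tbl[s]);

		/* tbl[s] may move into the hole at @b only if @b lies on
		 * its lookup path, i.e. cyclically between its home @h
		 * (inclusive) and @s (exclusive)
		 */
		if (in_cyclic_range(b, h, s)) {
			tbl[b] = tbl[s];
			b = s;	/* the hole moves to @s */
		}
	}
	tbl[b] = NULL;		/* clear the final hole */
}

int main(void)
{
	struct entry e1 = { .key = 1 };
	struct entry e2 = { .key = 1 + TBL_SIZE };	/* same home bucket */

	insert(&e1);
	insert(&e2);
	remove_entry(&e1);

	/* e2 must still be reachable from its home bucket after the shuffle */
	return tbl[probe(&e2)] == &e2 ? 0 : 1;
}

The point Algorithm R captures shows up in remove_entry(): an entry
further along the cluster may be pulled back into the hole only if the
hole sits on the path a lookup for it would walk, which is the same
cyclic-range question the thread above is untangling. Note also that, at
least with the half-open convention assumed in this sketch and b != s,
the complement of the cyclic range [s, b) is exactly [b, s), which is the
kind of flip that turns the stacked negatives into a positive test.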