[PATCH v5 00/19] RFC: Unified flow table
This is a fourth draft of the first steps in implementing more general
"connection" tracking, as described at:
    https://pad.passt.top/p/NewForwardingModel

This series changes the TCP connection table and hash table into a more
general flow table that can track other protocols as well.  Each flow
uniformly keeps track of all the relevant addresses and ports, which will
allow for more robust control of NAT and port forwarding.

ICMP is converted to use the new flow table.

This doesn't include UDP, but I'm working on it right now and making
progress.  I'm posting this to give a head start on the review :)

Caveats:
 * We significantly increase the size of a connection/flow entry

Changes since v4:
 * flowside_from_af() no longer fills in unspecified addresses when
   passed NULL
 * Split and rename flow hash lookup function
 * Clarified flow state transitions, and enforced where practical
 * Made side 0 always the initiating side of a flow, rather than letting
   the protocol specific code decide
 * Separated pifs from flowside addresses to allow better structure
   packing

Changes since v3:
 * Complex rebase on top of the many things that have happened upstream
   since v2.
 * Assorted other changes.
 * Replace TAPFSIDE() and SOCKFSIDE() macros with local variables.

Changes since v2:
 * Cosmetic fixes based on review
 * Extra doc comments for enum flow_type
 * Rename flowside to flowaddrs which turns out to make more sense in
   light of future changes
 * Fix bug where the socket flowaddrs for tap initiated connections
   wasn't initialised to match the socket address we were using in the
   case of map-gw NAT
 * New flowaddrs_from_sock() helper used in most cases which is cleaner
   and should avoid bugs like the above
 * Using newer centralised workarounds for clang-tidy issue 58992
 * Remove duplicate definition of FLOW_MAX as maximum flow type and
   maximum number of tracked flows
 * Rebased on newer versions of preliminary work (ICMP, flow based
   dispatch and allocation, bind/address cleanups)
 * Unified hash table as well as base flow table
 * Integrated ICMP

Changes since v1:
 * Terminology changes
   - "Endpoint" address/port instead of "correspondent" address/port
   - "flowside" instead of "demiflow"
 * Actually move the connection table to a new flow table structure in
   new files
 * Significant rearrangement of earlier patches on top of that new
   table, to reduce churn

David Gibson (19):
  flow: Clarify and enforce flow state transitions
  flow: Make side 0 always be the initiating side
  flow: Record the pifs for each side of each flow
  tcp: Remove interim 'tapside' field from connection
  flow: Common data structures for tracking flow addresses
  flow: Populate address information for initiating side
  flow: Populate address information for non-initiating side
  tcp, flow: Remove redundant information, repack connection structures
  tcp: Obtain guest address from flowside
  tcp: Simplify endpoint validation using flowside information
  tcp_splice: Eliminate SPLICE_V6 flag
  tcp, flow: Replace TCP specific hash function with general flow hash
  flow, tcp: Generalise TCP hash table to general flow hash table
  tcp: Re-use flow hash for initial sequence number generation
  icmp: Use flowsides as the source of truth wherever possible
  icmp: Look up ping flows using flow hash
  icmp: Eliminate icmp_id_map
  flow, tcp: Flow based NAT and port forwarding for TCP
  flow, icmp: Use general flow forwarding rules for ICMP

 flow.c       | 538 +++++++++++++++++++++++++++++++++++++++++++++------
 flow.h       | 149 +++++++++++++-
 flow_table.h |  21 ++
 fwd.c        | 110 +++++++++++
 fwd.h        |  12 ++
 icmp.c       |  98 ++++++----
 icmp_flow.h  |   1 -
 inany.h      |  29 ++-
 passt.h      |   3 +
 pif.h        |   1 -
 tap.c        |  11 --
 tap.h        |   1 -
 tcp.c        | 484 ++++++++++++---------------------------------
 tcp_conn.h   |  36 ++--
 tcp_splice.c |  97 ++--------
 tcp_splice.h |   5 +-
 16 files changed, 999 insertions(+), 597 deletions(-)

--
2.45.0
Flows move through several different states in their lifetime. The rules for
these are documented in comments, but they're pretty complex and a number of
the transitions are implicit, which makes this fragile and error-prone.
Change the code to explicitly track the states in a field. Make all
transitions explicit and logged. To the extent that it's practical in C,
enforce what can and can't be done in various states with ASSERT()s.
While we're at it, tweak the docs to clarify the restrictions on each state
a bit.
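
To make the intended sequence concrete, here is a caller-side sketch using
the helpers this patch introduces (FLOW_SET_TYPE(), FLOW_ACTIVATE(),
flow_alloc(), flow_alloc_cancel(); TAPSIDE is the existing tcp.c constant).
example_new_conn() and example_setup_failed() are placeholders, not part of
the patch:

/* Illustrative lifecycle, as driven by a protocol module:
 * FREE -> NEW -> TYPED -> ACTIVE, with cancellation back to FREE.
 */
static struct tcp_tap_conn *example_new_conn(void)
{
	union flow *flow = flow_alloc();		/* FREE -> NEW */
	struct tcp_tap_conn *conn;

	if (!flow)
		return NULL;

	/* NEW -> TYPED: type is set, type specific setup may start */
	conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp, TAPSIDE);

	if (example_setup_failed(conn)) {
		flow_alloc_cancel(flow);		/* TYPED -> FREE */
		return NULL;
	}

	FLOW_ACTIVATE(conn);				/* TYPED -> ACTIVE */
	return conn;
}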
Signed-off-by: David Gibson
On Tue, 14 May 2024 11:03:19 +1000
David Gibson wrote:
Flows move through several different states in their lifetime. The rules for these are documented in comments, but they're pretty complex and a number of the transitions are implicit, which makes this fragile and error-prone.
Change the code to explicitly track the states in a field. Make all transitions explicit and logged. To the extent that it's practical in C, enforce what can and can't be done in various states with ASSERT()s.
While we're at it, tweak the docs to clarify the restrictions on each state a bit.
Now it looks much clearer to me.
Signed-off-by: David Gibson
---
 flow.c       | 144 ++++++++++++++++++++++++++++++---------------------
 flow.h       |  67 ++++++++++++++++++++++--
 flow_table.h |  10 ++++
 icmp.c       |   4 +-
 tcp.c        |   8 ++-
 tcp_splice.c |   4 +-
 6 files changed, 168 insertions(+), 69 deletions(-)

diff --git a/flow.c b/flow.c
index 80dd269..768e0f6 100644
--- a/flow.c
+++ b/flow.c
@@ -18,6 +18,15 @@
 #include "flow.h"
 #include "flow_table.h"
+const char *flow_state_str[] = {
+	[FLOW_STATE_FREE]	= "FREE",
+	[FLOW_STATE_NEW]	= "NEW",
+	[FLOW_STATE_TYPED]	= "TYPED",
+	[FLOW_STATE_ACTIVE]	= "ACTIVE",
+};
+static_assert(ARRAY_SIZE(flow_state_str) == FLOW_NUM_STATES,
+	      "flow_state_str[] doesn't match enum flow_state");
+
 const char *flow_type_str[] = {
 	[FLOW_TYPE_NONE] = "<none>",
 	[FLOW_TCP] = "TCP connection",
@@ -39,46 +48,6 @@ static_assert(ARRAY_SIZE(flow_proto) == FLOW_NUM_TYPES,
/* Global Flow Table */
-/** - * DOC: Theory of Operation - flow entry life cycle - * - * An individual flow table entry moves through these logical states, usually in - * this order. - * - * FREE - Part of the general pool of free flow table entries - * Operations: - * - flow_alloc() finds an entry and moves it to ALLOC state - * - * ALLOC - A tentatively allocated entry - * Operations: - * - flow_alloc_cancel() returns the entry to FREE state - * - FLOW_START() set the entry's type and moves to START state - * Caveats: - * - It's not safe to write fields in the flow entry - * - It's not safe to allocate further entries with flow_alloc() - * - It's not safe to return to the main epoll loop (use FLOW_START() - * to move to START state before doing so) - * - It's not safe to use flow_*() logging functions - * - * START - An entry being prepared by flow type specific code - * Operations: - * - Flow type specific fields may be accessed - * - flow_*() logging functions - * - flow_alloc_cancel() returns the entry to FREE state - * Caveats: - * - Returning to the main epoll loop or allocating another entry - * with flow_alloc() implicitly moves the entry to ACTIVE state. - * - * ACTIVE - An active flow entry managed by flow type specific code - * Operations: - * - Flow type specific fields may be accessed - * - flow_*() logging functions - * - Flow may be expired by returning 'true' from flow type specific - * deferred or timer handler. This will return it to FREE state. - * Caveats: - * - It's not safe to call flow_alloc_cancel() - */ - /** * DOC: Theory of Operation - allocating and freeing flow entries * @@ -132,6 +101,7 @@ static_assert(ARRAY_SIZE(flow_proto) == FLOW_NUM_TYPES,
 unsigned flow_first_free;
 union flow flowtab[FLOW_MAX];
+static const union flow *flow_new_entry; /* = NULL */
/* Last time the flow timers ran */ static struct timespec flow_timer_run; @@ -144,6 +114,7 @@ static struct timespec flow_timer_run; */ void flow_log_(const struct flow_common *f, int pri, const char *fmt, ...) { + const char *typestate;
type_or_state? It took me a while to figure this out (well, because I didn't read the rest, my bad, but still it could be clearer).
char msg[BUFSIZ]; va_list args;
@@ -151,40 +122,65 @@ void flow_log_(const struct flow_common *f, int pri, const char *fmt, ...) (void)vsnprintf(msg, sizeof(msg), fmt, args); va_end(args);
- logmsg(pri, "Flow %u (%s): %s", flow_idx(f), FLOW_TYPE(f), msg); + /* Show type if it's set, otherwise the state */ + if (f->state < FLOW_STATE_TYPED) + typestate = FLOW_STATE(f); + else + typestate = FLOW_TYPE(f); + + logmsg(pri, "Flow %u (%s): %s", flow_idx(f), typestate, msg); +} + +/** + * flow_set_state() - Change flow's state + * @f: Flow to update + * @state: New state + */ +static void flow_set_state(struct flow_common *f, enum flow_state state) +{ + uint8_t oldstate = f->state; + + ASSERT(state < FLOW_NUM_STATES); + ASSERT(oldstate < FLOW_NUM_STATES); + + f->state = state; + flow_log_(f, LOG_DEBUG, "%s -> %s", flow_state_str[oldstate], + FLOW_STATE(f)); }
/** - * flow_start() - Set flow type for new flow and log - * @flow: Flow to set type for + * flow_set_type() - Set type and mvoe to TYPED state
move
+ * @flow: Flow to change state
...for? Or Flow changing state?
* @type: Type for new flow * @iniside: Which side initiated the new flow * * Return: @flow - * - * Should be called before setting any flow type specific fields in the flow - * table entry. */ -union flow *flow_start(union flow *flow, enum flow_type type, - unsigned iniside) +union flow *flow_set_type(union flow *flow, enum flow_type type, + unsigned iniside) { + struct flow_common *f = &flow->f; + + ASSERT(type != FLOW_TYPE_NONE); + ASSERT(flow_new_entry == flow && f->state == FLOW_STATE_NEW); + ASSERT(f->type == FLOW_TYPE_NONE); + (void)iniside; - flow->f.type = type; - flow_dbg(flow, "START %s", flow_type_str[flow->f.type]); + f->type = type; + flow_set_state(f, FLOW_STATE_TYPED); return flow; }
/** - * flow_end() - Clear flow type for finished flow and log - * @flow: Flow to clear + * flow_activate() - Move flow to ACTIVE state + * @f: Flow to change state */ -static void flow_end(union flow *flow) +void flow_activate(struct flow_common *f) { - if (flow->f.type == FLOW_TYPE_NONE) - return; /* Nothing to do */ + ASSERT(&flow_new_entry->f == f && f->state == FLOW_STATE_TYPED);
- flow_dbg(flow, "END %s", flow_type_str[flow->f.type]); - flow->f.type = FLOW_TYPE_NONE; + flow_set_state(f, FLOW_STATE_ACTIVE); + flow_new_entry = NULL; }
/** @@ -196,9 +192,12 @@ union flow *flow_alloc(void) { union flow *flow = &flowtab[flow_first_free];
+ ASSERT(!flow_new_entry); + if (flow_first_free >= FLOW_MAX) return NULL;
+ ASSERT(flow->f.state == FLOW_STATE_FREE); ASSERT(flow->f.type == FLOW_TYPE_NONE); ASSERT(flow->free.n >= 1); ASSERT(flow_first_free + flow->free.n <= FLOW_MAX); @@ -221,7 +220,10 @@ union flow *flow_alloc(void) flow_first_free = flow->free.next; }
+ flow_new_entry = flow; memset(flow, 0, sizeof(*flow)); + flow_set_state(&flow->f, FLOW_STATE_NEW); + return flow; }
@@ -233,15 +235,21 @@ union flow *flow_alloc(void) */ void flow_alloc_cancel(union flow *flow) { + ASSERT(flow_new_entry == flow); + ASSERT(flow->f.state == FLOW_STATE_NEW || + flow->f.state == FLOW_STATE_TYPED); ASSERT(flow_first_free > FLOW_IDX(flow));
- flow_end(flow); + flow_set_state(&flow->f, FLOW_STATE_FREE); + memset(flow, 0, sizeof(*flow)); + /* Put it back in a length 1 free cluster, don't attempt to fully * reverse flow_alloc()s steps. This will get folded together the next * time flow_defer_handler runs anyway() */ flow->free.n = 1; flow->free.next = flow_first_free; flow_first_free = FLOW_IDX(flow); + flow_new_entry = NULL; }
/** @@ -265,7 +273,8 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now) union flow *flow = &flowtab[idx]; bool closed = false;
- if (flow->f.type == FLOW_TYPE_NONE) { + switch (flow->f.state) { + case FLOW_STATE_FREE: { unsigned skip = flow->free.n;
/* First entry of a free cluster must have n >= 1 */ @@ -287,6 +296,20 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now) continue; }
+ case FLOW_STATE_NEW: + case FLOW_STATE_TYPED: + flow_err(flow, "Incomplete flow at end of cycle"); + ASSERT(false); + break; + + case FLOW_STATE_ACTIVE: + /* Nothing to do */ + break; + + default: + ASSERT(false); + } + switch (flow->f.type) { case FLOW_TYPE_NONE: ASSERT(false); @@ -310,7 +333,8 @@ void flow_defer_handler(const struct ctx *c, const struct timespec *now) }
if (closed) { - flow_end(flow); + flow_set_state(&flow->f, FLOW_STATE_FREE); + memset(flow, 0, sizeof(*flow));
if (free_head) { /* Add slot to current free cluster */ diff --git a/flow.h b/flow.h index c943c44..073a734 100644 --- a/flow.h +++ b/flow.h @@ -9,6 +9,66 @@
#define FLOW_TIMER_INTERVAL 1000 /* ms */
+/**
+ * enum flow_state - States of a flow table entry
+ *
+ * An individual flow table entry moves through these states, usually in this
+ * order.
+ * General rules:
+ * - Code outside flow.c should never write common fields of union flow.
+ * - The state field may always be read.
+ *
+ * FREE - Part of the general pool of free flow table entries
+ * Operations:
+ * - flow_alloc() finds an entry and moves it to NEW state
even s/ state// (same below) maybe? It's a bit redundant. No strong preference though.
+ *
+ * NEW - Freshly allocated, uninitialised entry
+ * Operations:
+ * - flow_alloc_cancel() returns the entry to FREE state
+ * - FLOW_SET_TYPE() sets the entry's type and moves to TYPED state
+ * Caveats:
+ * - No fields other than state may be accessed.
s/\.//
+ * - At most one entry may be in NEW or TYPED state at a time, so it's
+ *   unsafe to use flow_alloc() again until this entry moves to
+ *   ACTIVE or FREE state
+ * - You may not return to the main epoll loop while an entry is in
+ *   NEW state.
+ *
+ * TYPED - Generic info initialised, type specific initialisation underway
+ * Operations:
+ * - All common fields may be read
+ * - Type specific fields may be read and written
+ * - flow_alloc_cancel() returns the entry to FREE state
+ * - FLOW_ACTIVATE() moves the entry to ACTIVE STATE
s/STATE/state/ (if you want to keep it)
+ * Caveats:
+ * - At most one entry may be in NEW or TYPED state at a time, so it's
+ *   unsafe to use flow_alloc() again until this entry moves to
+ *   ACTIVE or FREE state
+ * - You may not return to the main epoll loop while an entry is in
+ *   TYPED state.
+ *
+ * ACTIVE - An active, fully-initialised flow entry
+ * Operations:
+ * - All common fields may be read
+ * - Type specific fields may be read and written
+ * - Flow may be expired by returning 'true' from flow type specific
'to expire' in this sense is actually intransitive. What you mean is perfectly clear after reading this a couple of times, but it might confuse non-native English speakers I guess?
+ *   deferred or timer handler. This will return it to FREE state.
+ * Caveats:
+ * - flow_alloc_cancel() may not be called on it
+ */
+enum flow_state {
+	FLOW_STATE_FREE,
+	FLOW_STATE_NEW,
+	FLOW_STATE_TYPED,
+	FLOW_STATE_ACTIVE,
+
+	FLOW_NUM_STATES,
+};
+
+extern const char *flow_state_str[];
+#define FLOW_STATE(f) \
+	((f)->state < FLOW_NUM_STATES ? flow_state_str[(f)->state] : "?")
+
 /**
  * enum flow_type - Different types of packet flows we track
  */
@@ -37,9 +97,11 @@ extern const uint8_t flow_proto[];
 /**
  * struct flow_common - Common fields for packet flows
+ * @state: State of the flow table entry
  * @type: Type of packet flow
  */
 struct flow_common {
+	uint8_t state;
In this case, I would typically do (https://seitan.rocks/seitan/tree/common/gluten.h?id=5a9302bab9c9bb3d1577f046...): #ifdef __GNUC__ enum flow_state state:8; #else uint8_t state; #endif ...and in any case we need to make sure to assign single values in the enum above: there are no guarantees that FLOW_STATE_ACTIVE is 3 otherwise (except for that static_assert(), but that's not its purpose).
uint8_t type; };
@@ -49,11 +111,6 @@ struct flow_common { #define FLOW_TABLE_PRESSURE 30 /* % of FLOW_MAX */ #define FLOW_FILE_PRESSURE 30 /* % of c->nofile */
-union flow *flow_start(union flow *flow, enum flow_type type, - unsigned iniside); -#define FLOW_START(flow_, t_, var_, i_) \ - (&flow_start((flow_), (t_), (i_))->var_) - /** * struct flow_sidx - ID for one side of a specific flow * @side: Side referenced (0 or 1) diff --git a/flow_table.h b/flow_table.h index b7e5529..58014d8 100644 --- a/flow_table.h +++ b/flow_table.h @@ -107,4 +107,14 @@ static inline flow_sidx_t flow_sidx(const struct flow_common *f, union flow *flow_alloc(void); void flow_alloc_cancel(union flow *flow);
+union flow *flow_set_type(union flow *flow, enum flow_type type, + unsigned iniside); +#define FLOW_SET_TYPE(flow_, t_, var_, i_) \ + (&flow_set_type((flow_), (t_), (i_))->var_) + +void flow_activate(struct flow_common *f); +#define FLOW_ACTIVATE(flow_) \ + (flow_activate(&(flow_)->f)) + + #endif /* FLOW_TABLE_H */ diff --git a/icmp.c b/icmp.c index 1c5cf84..fda868d 100644 --- a/icmp.c +++ b/icmp.c @@ -167,7 +167,7 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c, if (!flow) return NULL;
- pingf = FLOW_START(flow, flowtype, ping, TAPSIDE); + pingf = FLOW_SET_TYPE(flow, flowtype, ping, TAPSIDE);
pingf->seq = -1; pingf->id = id; @@ -198,6 +198,8 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c,
*id_sock = pingf;
+ FLOW_ACTIVATE(pingf); + return pingf;
cancel: diff --git a/tcp.c b/tcp.c index 21d0af0..65208ca 100644 --- a/tcp.c +++ b/tcp.c @@ -2006,7 +2006,7 @@ static void tcp_conn_from_tap(struct ctx *c, sa_family_t af, goto cancel; }
- conn = FLOW_START(flow, FLOW_TCP, tcp, TAPSIDE); + conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp, TAPSIDE); conn->sock = s; conn->timer = -1; conn_event(c, conn, TAP_SYN_RCVD); @@ -2077,6 +2077,7 @@ static void tcp_conn_from_tap(struct ctx *c, sa_family_t af, }
tcp_epoll_ctl(c, conn); + FLOW_ACTIVATE(conn); return;
cancel: @@ -2724,7 +2725,8 @@ static void tcp_tap_conn_from_sock(struct ctx *c, in_port_t dstport, const union sockaddr_inany *sa, const struct timespec *now) { - struct tcp_tap_conn *conn = FLOW_START(flow, FLOW_TCP, tcp, SOCKSIDE); + struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp, + SOCKSIDE);
conn->sock = s; conn->timer = -1; @@ -2747,6 +2749,8 @@ static void tcp_tap_conn_from_sock(struct ctx *c, in_port_t dstport, conn_flag(c, conn, ACK_FROM_TAP_DUE);
tcp_get_sndbuf(conn); + + FLOW_ACTIVATE(conn); }
/** diff --git a/tcp_splice.c b/tcp_splice.c index 4c36b72..abe98a0 100644 --- a/tcp_splice.c +++ b/tcp_splice.c @@ -472,7 +472,7 @@ bool tcp_splice_conn_from_sock(const struct ctx *c, return false; }
- conn = FLOW_START(flow, FLOW_TCP_SPLICE, tcp_splice, 0); + conn = FLOW_SET_TYPE(flow, FLOW_TCP_SPLICE, tcp_splice, 0);
conn->flags = af == AF_INET ? 0 : SPLICE_V6; conn->s[0] = s0; @@ -486,6 +486,8 @@ bool tcp_splice_conn_from_sock(const struct ctx *c, if (tcp_splice_connect(c, conn, af, pif1, dstport)) conn_flag(c, conn, CLOSING);
+ FLOW_ACTIVATE(conn); + return true; }
Everything else looks good to me. -- Stefano
Each flow in the flow table has two sides, 0 and 1, representing the
two interfaces between which passt/pasta will forward data for that flow.
Which side is which is currently up to the protocol specific code: TCP
uses side 0 for the host/"sock" side and 1 for the guest/"tap" side, except
for spliced connections where it uses 0 for the initiating side and 1 for
the accepting side. ICMP also uses 0 for the host/"sock" side and 1 for
the guest/"tap" side, but in its case the latter is always also the
initiating side.
Make this generically consistent by always using side 0 for the initiating
side and 1 for the accepting side. This doesn't simplify a lot for now,
and arguably makes TCP slightly more complex, since we add an extra field
to the connection structure to record which is the guest facing side.
This is an interim change, which we'll be able to remove later.
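
As a quick orientation (not part of the patch text), the fixed numbering
works out as below; the helper at the end is illustrative only, the patch
itself just stores and negates the bit directly:

/* Side 0 (INISIDE) is always the initiator, side 1 (FWDSIDE) its peer:
 *
 *   guest-initiated TCP:  side 0 = tap/guest        side 1 = host/"sock"
 *   accept()ed TCP:       side 0 = host/"sock"      side 1 = tap/guest
 *   spliced TCP:          side 0 = initiating sock  side 1 = accepting sock
 *   ping from guest:      side 0 = tap/guest        side 1 = host/"sock"
 *
 * TCP remembers which index faces the guest (conn->tapside), so the
 * socket-facing side is always the other one:
 */
static inline unsigned conn_sockside(const struct tcp_tap_conn *conn)
{
	return !conn->tapside;
}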
Signed-off-by: David Gibson
On Tue, 14 May 2024 11:03:20 +1000
David Gibson wrote:
Each flow in the flow table has two sides, 0 and 1, representing the two interfaces between which passt/pasta will forward data for that flow. Which side is which is currently up to the protocol specific code: TCP uses side 0 for the host/"sock" side and 1 for the guest/"tap" side, except for spliced connections where it uses 0 for the initiating side and 1 for the accepting side. ICMP also uses 0 for the host/"sock" side and 1 for the guest/"tap" side, but in its case the latter is always also the initiating side.
Make this generically consistent by always using side 0 for the initiating side and 1 for the accepting side. This doesn't simplify a lot for now, and arguably makes TCP slightly more complex, since we add an extra field to the connection structure to record which is the guest facing side. This is an interim change, which we'll be able to remove later.
Signed-off-by: David Gibson
--- flow.c | 5 +---- flow.h | 5 +++++ flow_table.h | 6 ++---- icmp.c | 8 ++------ tcp.c | 19 ++++++++----------- tcp_conn.h | 3 ++- tcp_splice.c | 2 +- 7 files changed, 21 insertions(+), 27 deletions(-) diff --git a/flow.c b/flow.c index 768e0f6..7456021 100644 --- a/flow.c +++ b/flow.c @@ -152,12 +152,10 @@ static void flow_set_state(struct flow_common *f, enum flow_state state) * flow_set_type() - Set type and mvoe to TYPED state * @flow: Flow to change state * @type: Type for new flow - * @iniside: Which side initiated the new flow * * Return: @flow */ -union flow *flow_set_type(union flow *flow, enum flow_type type, - unsigned iniside) +union flow *flow_set_type(union flow *flow, enum flow_type type) { struct flow_common *f = &flow->f;
@@ -165,7 +163,6 @@ union flow *flow_set_type(union flow *flow, enum flow_type type, ASSERT(flow_new_entry == flow && f->state == FLOW_STATE_NEW); ASSERT(f->type == FLOW_TYPE_NONE);
- (void)iniside; f->type = type; flow_set_state(f, FLOW_STATE_TYPED); return flow; diff --git a/flow.h b/flow.h index 073a734..28169a8 100644 --- a/flow.h +++ b/flow.h @@ -95,6 +95,11 @@ extern const uint8_t flow_proto[]; #define FLOW_PROTO(f) \ ((f)->type < FLOW_NUM_TYPES ? flow_proto[(f)->type] : 0)
+#define SIDES 2
+
+#define INISIDE 0 /* Initiating side */
+#define FWDSIDE 1 /* Forwarded side */
+
 /**
  * struct flow_common - Common fields for packet flows
  * @state: State of the flow table entry
diff --git a/flow_table.h b/flow_table.h
index 58014d8..7c98195 100644
--- a/flow_table.h
+++ b/flow_table.h
@@ -107,10 +107,8 @@ static inline flow_sidx_t flow_sidx(const struct flow_common *f,
 union flow *flow_alloc(void);
 void flow_alloc_cancel(union flow *flow);
-union flow *flow_set_type(union flow *flow, enum flow_type type, - unsigned iniside); -#define FLOW_SET_TYPE(flow_, t_, var_, i_) \ - (&flow_set_type((flow_), (t_), (i_))->var_) +union flow *flow_set_type(union flow *flow, enum flow_type type); +#define FLOW_SET_TYPE(flow_, t_, var_) (&flow_set_type((flow_), (t_))->var_)
void flow_activate(struct flow_common *f); #define FLOW_ACTIVATE(flow_) \ diff --git a/icmp.c b/icmp.c index fda868d..6df0989 100644 --- a/icmp.c +++ b/icmp.c @@ -45,10 +45,6 @@ #define ICMP_ECHO_TIMEOUT 60 /* s, timeout for ICMP socket activity */ #define ICMP_NUM_IDS (1U << 16)
-/* Sides of a flow as we use them for ping streams */ -#define SOCKSIDE 0 -#define TAPSIDE 1 - #define PINGF(idx) (&(FLOW(idx)->ping))
/* Indexed by ICMP echo identifier */ @@ -167,7 +163,7 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c, if (!flow) return NULL;
- pingf = FLOW_SET_TYPE(flow, flowtype, ping, TAPSIDE); + pingf = FLOW_SET_TYPE(flow, flowtype, ping);
pingf->seq = -1; pingf->id = id; @@ -180,7 +176,7 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c, bind_if = c->ip6.ifname_out; }
- ref.flowside = FLOW_SIDX(flow, SOCKSIDE); + ref.flowside = FLOW_SIDX(flow, FWDSIDE); pingf->sock = sock_l4(c, af, flow_proto[flowtype], bind_addr, bind_if, 0, ref.data);
diff --git a/tcp.c b/tcp.c index 65208ca..06401ba 100644 --- a/tcp.c +++ b/tcp.c @@ -303,10 +303,6 @@
#include "flow_table.h"
-/* Sides of a flow as we use them in "tap" connections */ -#define SOCKSIDE 0 -#define TAPSIDE 1 - #define TCP_FRAMES_MEM 128 #define TCP_FRAMES \ (c->mode == MODE_PASST ? TCP_FRAMES_MEM : 1) @@ -581,7 +577,7 @@ static int tcp_epoll_ctl(const struct ctx *c, struct tcp_tap_conn *conn) { int m = conn->in_epoll ? EPOLL_CTL_MOD : EPOLL_CTL_ADD; union epoll_ref ref = { .type = EPOLL_TYPE_TCP, .fd = conn->sock, - .flowside = FLOW_SIDX(conn, SOCKSIDE) }; + .flowside = FLOW_SIDX(conn, !conn->tapside), }; struct epoll_event ev = { .data.u64 = ref.u64 };
if (conn->events == CLOSED) { @@ -1134,7 +1130,7 @@ static uint64_t tcp_conn_hash(const struct ctx *c, static inline unsigned tcp_hash_probe(const struct ctx *c, const struct tcp_tap_conn *conn) { - flow_sidx_t sidx = FLOW_SIDX(conn, TAPSIDE); + flow_sidx_t sidx = FLOW_SIDX(conn, conn->tapside); unsigned b = tcp_conn_hash(c, conn) % TCP_HASH_TABLE_SIZE;
/* Linear probing */ @@ -1154,7 +1150,7 @@ static void tcp_hash_insert(const struct ctx *c, struct tcp_tap_conn *conn) { unsigned b = tcp_hash_probe(c, conn);
- tc_hash[b] = FLOW_SIDX(conn, TAPSIDE); + tc_hash[b] = FLOW_SIDX(conn, conn->tapside); flow_dbg(conn, "hash table insert: sock %i, bucket: %u", conn->sock, b); }
@@ -2006,7 +2002,8 @@ static void tcp_conn_from_tap(struct ctx *c, sa_family_t af, goto cancel; }
- conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp, TAPSIDE); + conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp); + conn->tapside = INISIDE; conn->sock = s; conn->timer = -1; conn_event(c, conn, TAP_SYN_RCVD); @@ -2725,9 +2722,9 @@ static void tcp_tap_conn_from_sock(struct ctx *c, in_port_t dstport, const union sockaddr_inany *sa, const struct timespec *now) { - struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp, - SOCKSIDE); + struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp);
+ conn->tapside = FWDSIDE; conn->sock = s; conn->timer = -1; conn->ws_to_tap = conn->ws_from_tap = 0; @@ -2884,7 +2881,7 @@ void tcp_sock_handler(struct ctx *c, union epoll_ref ref, uint32_t events) struct tcp_tap_conn *conn = CONN(ref.flowside.flow);
ASSERT(conn->f.type == FLOW_TCP); - ASSERT(ref.flowside.side == SOCKSIDE); + ASSERT(ref.flowside.side == !conn->tapside);
if (conn->events == CLOSED) return; diff --git a/tcp_conn.h b/tcp_conn.h index d280b22..5df0076 100644 --- a/tcp_conn.h +++ b/tcp_conn.h @@ -13,6 +13,7 @@ * struct tcp_tap_conn - Descriptor for a TCP connection (not spliced) * @f: Generic flow information * @in_epoll: Is the connection in the epoll set? + * @tapside: Which side of the flow faces the tap/guest interface * @tap_mss: MSS advertised by tap/guest, rounded to 2 ^ TCP_MSS_BITS * @sock: Socket descriptor number * @events: Connection events, implying connection states @@ -39,6 +40,7 @@ struct tcp_tap_conn { struct flow_common f;
bool in_epoll :1; + unsigned tapside :1;
This is a bit "far" from where the bit meaning is defined (flow.h). Perhaps, in the comment: * @tapside: Which side (INISIDE/FWDSIDE) corresponds to the tap/guest interface ? And this is almost too obvious to ask, but I'm not sure: why isn't this in flow_common? I guess we'll need it for all the protocols, eventually, right? Is it because otherwise we have 17 bits there?
#define TCP_RETRANS_BITS 3 unsigned int retrans :TCP_RETRANS_BITS; @@ -106,7 +108,6 @@ struct tcp_tap_conn { uint32_t seq_init_from_tap; };
-#define SIDES 2 /** * struct tcp_splice_conn - Descriptor for a spliced TCP connection * @f: Generic flow information diff --git a/tcp_splice.c b/tcp_splice.c index abe98a0..5da7021 100644 --- a/tcp_splice.c +++ b/tcp_splice.c @@ -472,7 +472,7 @@ bool tcp_splice_conn_from_sock(const struct ctx *c, return false; }
- conn = FLOW_SET_TYPE(flow, FLOW_TCP_SPLICE, tcp_splice, 0); + conn = FLOW_SET_TYPE(flow, FLOW_TCP_SPLICE, tcp_splice);
conn->flags = af == AF_INET ? 0 : SPLICE_V6; conn->s[0] = s0;
-- Stefano
Currently we have no generic information in flows apart from the type and
state; everything else is specific to the flow type. Start introducing
generic flow information by recording the pifs which the flow connects.
To keep track of what information is valid, introduce new flow states:
INI for when the initiating side's information is complete, and FWD for
when both sides' information is complete. For now, these states seem like
busy work, but they'll become more important as we add more generic
information.
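
Roughly the shape this takes, as a sketch only; the state and field names
follow their use later in the series (f->pif[INISIDE], FLOW_STATE_INI,
FLOW_STATE_FWD), and the exact definitions in the patch may differ:

/* Sketch: the flow records which pif each side connects to, and two new
 * states mark how much of that is filled in.
 */
enum flow_state {
	FLOW_STATE_FREE,
	FLOW_STATE_NEW,
	FLOW_STATE_INI,		/* initiating side's pif is known */
	FLOW_STATE_FWD,		/* forwarded side's pif is known too */
	FLOW_STATE_TYPED,
	FLOW_STATE_ACTIVE,

	FLOW_NUM_STATES,
};

struct flow_common {
	uint8_t state;		/* enum flow_state */
	uint8_t type;		/* enum flow_type */
	uint8_t pif[SIDES];	/* interface (pif) each side is connected to */
};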
Signed-off-by: David Gibson
We recently introduced this field to keep track of which side of a TCP flow
is the guest/tap facing one. Now that we generically record which pif each
side of each flow is connected to, we can easily derive that, and no longer
need to keep track of it explicitly.
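
For illustration, a hypothetical helper (not necessarily how the patch
spells it) showing the derivation:

/* Hypothetical sketch: with f->pif[] recorded for both sides, the side
 * facing the guest is simply whichever one is connected to the tap pif,
 * so it no longer needs to be stored in the TCP connection itself.
 */
static inline unsigned conn_tapside(const struct tcp_tap_conn *conn)
{
	return conn->f.pif[INISIDE] == PIF_TAP ? INISIDE : FWDSIDE;
}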
Signed-off-by: David Gibson
Handling of each protocol needs some degree of tracking of the
addresses and ports at the end of each connection or flow. Sometimes
that's explicit (as in the guest visible addresses for TCP
connections), sometimes implicit (the bound and connected addresses of
sockets).
To allow more consistent handling across protocols we want to
uniformly track the address and port at each end of the connection.
Furthermore, because we allow port remapping, and we sometimes need to
apply NAT, the addresses and ports can be different as seen by the
guest/namespace and as seen by the host.
Introduce 'struct flowside' to keep track of address and port
information related to one side of a flow. Store two of these in the
common fields of a flow to track that information for both sides. For
now we just introduce the structure itself, later patches will
actually populate and use it.
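
Roughly the idea, as a sketch; the field and member names follow their use
later in the series (e.g. f->side[INISIDE].eaddr), the exact layout in the
patch may differ, and union inany_addr / in_port_t come from inany.h and
<netinet/in.h>:

struct flowside {
	union inany_addr faddr;	/* address we present on this side */
	union inany_addr eaddr;	/* endpoint (peer) address on this side */
	in_port_t fport;	/* our port on this side */
	in_port_t eport;	/* endpoint port on this side */
};

/* ...with two of these stored in the common part of each flow entry: */
struct flow_common {
	uint8_t state;
	uint8_t type;
	uint8_t pif[SIDES];
	struct flowside side[SIDES];	/* e.g. f->side[INISIDE].eaddr */
};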
Signed-off-by: David Gibson
This requires the address and port information for the initiating side to be
populated when a flow enters INI state. Implement that for TCP and ICMP.
For now this leaves some information redundantly recorded in both generic
and type specific fields. We'll fix that in later patches.
Signed-off-by: David Gibson
This requires the address and port information for the forwarded
(non-initiating) side to be populated when a flow enters FWD state. Implement
that for TCP and ICMP. For now this leaves some information redundantly
recorded in both generic and type specific fields. We'll fix that in later
patches.
For TCP we now use the information from the flow to construct the
destination socket address in both tcp_conn_from_tap() and
tcp_splice_connect().
Signed-off-by: David Gibson
Some information we explicitly store in the TCP connection is now
duplicated in the common flow structure. Access it from there instead, and
remove it from the TCP specific structure. With that done we can reorder
both the "tap" and "splice" TCP structures a bit to get better packing for
the new combined flow table entries.
Signed-off-by: David Gibson
Currently we always deliver inbound TCP packets to the guest's most
recent observed IP address. This has the odd side effect that if the
guest changes its IP address with active TCP connections we might
deliver packets from old connections to the new address. That won't
work; it will probably result in an RST from the guest. Worse, if the
guest adds a new address but also retains the old one, then we could
break those old connections by redirecting them to the new address.
Now that we maintain flowside information, we have a record of the correct
guest side address and can just use it.
Signed-off-by: David Gibson
Now that we store all our endpoints in the flowside structure, use some
inany helpers to make validation of those endpoints simpler.
Signed-off-by: David Gibson
Since we're now constructing socket addresses based on information in the
flowside, we no longer need an explicit flag to tell if we're dealing with
an IPv4 or IPv6 connection. Hence, drop the now unused SPLICE_V6 flag.
As well as just simplifying the code, this allows for possible future
extensions where we could splice an IPv4 connection to an IPv6 connection
or vice versa.
Signed-off-by: David Gibson
Currently we match TCP packets received on the tap interface to a TCP
connection via a hash table based on the forwarding address and both
ports. We hope in future to allow for multiple guest side addresses, or
for multiple interfaces which means we may need to distinguish based on
the endpoint address and pif as well. We also want a unified hash table
to cover multiple protocols, not just TCP.
Replace the TCP specific hash function with one suitable for general flows,
or rather for one side of a general flow. This includes all the
information from struct flowside, plus the L4 protocol number.
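
As a sketch of what one hash computation now covers (the struct is
illustrative only, not the patch's API; types are assumed from inany.h and
<netinet/in.h>, and the exact hashing and keying details aren't shown here):

/* Illustrative only: the information identifying one side of a flow that
 * the general hash covers, i.e. the flowside plus the L4 protocol.
 */
struct flow_hash_input {
	uint8_t proto;			/* L4 protocol number */
	union inany_addr faddr;		/* our address on this side */
	in_port_t fport;		/* our port on this side */
	union inany_addr eaddr;		/* endpoint address on this side */
	in_port_t eport;		/* endpoint port on this side */
};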
Signed-off-by: David Gibson
Move the data structures and helper functions for the TCP hash table to
flow.c, making it a general hash table indexing sides of flows. This is
largely code motion and straightforward renames. There are two semantic
changes:
* flow_lookup_af() now needs to verify that the entry has a matching
protocol as well as matching addresses, ports and interface
* We double the size of the hash table, because it's now at least
theoretically possible for both sides of each flow to be hashed.
Signed-off-by: David Gibson
We generate TCP initial sequence numbers, when we need them, from a
hash of the source and destination addresses and ports, plus a
timestamp. Moments later, we generate another hash of the same
information plus some more to insert the connection into the flow hash
table.
With some tweaks to the flow_hash_insert() interface and changing the
order we can re-use that hash table hash for the initial sequence
number, rather than calculating another one. It won't generate
identical results, but that doesn't matter as long as the sequence
numbers are well scattered.
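
Roughly the idea (illustrative arithmetic only, not the patch's exact
formula; assumes <stdint.h> and <time.h>):

/* Illustrative only: derive an ISN from the flow hash computed at hash
 * table insertion time, plus a time-based term so ISNs keep advancing,
 * instead of running a second hash over the same addresses and ports.
 */
static uint32_t example_isn(uint64_t flow_hash, const struct timespec *now)
{
	uint32_t ns_term = (uint32_t)now->tv_sec * 1000000000U +
			   (uint32_t)now->tv_nsec;

	return (uint32_t)flow_hash ^ (uint32_t)(flow_hash >> 32) ^
	       (ns_term >> 5);
}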
Signed-off-by: David Gibson
icmp_sock_handler() obtains the guest address from its most recently
observed IP, and the ICMP id from the epoll reference. Both of these
can be obtained readily from the flow.
icmp_tap_handler() builds its socket address for sendto() directly
from the destination address supplied by the incoming tap packet.
This can instead be generated from the flow.
struct icmp_ping_flow contains a field for the ICMP id of the ping, but
this is now redundant, since the id is also stored as the "port" in the
common flowsides.
Using the flowsides as the common source of truth here prepares us for
allowing more flexible NAT and forwarding by properly initialising
that flowside information.
Signed-off-by: David Gibson
When we receive a ping packet from the tap interface, we currently locate
the correct flow entry (if present) using an ancillary data structure, the
icmp_id_map[] tables. However, we can look this up using the flow hash
table - that's what it's for.
Signed-off-by: David Gibson
With previous reworks the icmp_id_map data structure is now maintained, but
never used for anything. Eliminate it.
Signed-off-by: David Gibson
Currently the code to translate host side addresses and ports to guest side
addresses and ports, and vice versa, is scattered across the TCP code.
This includes both port redirection as controlled by the -t and -T options,
and our special case NAT controlled by the --no-map-gw option.
Gather this logic into fwd_from_*() functions for each input interface
in fwd.c which take protocol and address information for the initiating
side and generates the pif and address information for the forwarded side.
This performs any NAT or port forwarding needed.
We create a flow_forward() helper which applies those forwarding functions
as needed to automatically move a flow from INI to FWD state. For now we
leave the older flow_forward_af() function taking explicit addresses as
a transitional tool.
Signed-off-by: David Gibson
On Tue, 14 May 2024 11:03:36 +1000
David Gibson wrote:
Currently the code to translate host side addresses and ports to guest side addresses and ports, and vice versa, is scattered across the TCP code. This includes both port redirection as controlled by the -t and -T options, and our special case NAT controlled by the --no-map-gw option.
Gather this logic into fwd_from_*() functions for each input interface in fwd.c which take protocol and address information for the initiating side and generates the pif and address information for the forwarded side. This performs any NAT or port forwarding needed.
We create a flow_forward() helper which applies those forwarding functions as needed to automatically move a flow from INI to FWD state. For now we leave the older flow_forward_af() function taking explicit addresses as a transitional tool.
Signed-off-by: David Gibson
--- flow.c | 53 +++++++++++++++++++++++++ flow_table.h | 2 + fwd.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++++++ fwd.h | 12 ++++++ tcp.c | 102 +++++++++++++++-------------------------------- tcp_splice.c | 63 ++--------------------------- tcp_splice.h | 5 +-- 7 files changed, 213 insertions(+), 134 deletions(-) diff --git a/flow.c b/flow.c index 4942075..a6afe39 100644 --- a/flow.c +++ b/flow.c @@ -304,6 +304,59 @@ const struct flowside *flow_forward_af(union flow *flow, uint8_t pif, return fwd; }
+ +/** + * flow_forward() - Determine where flow should forward to, and move to FWD + * @c: Execution context + * @flow: Flow to forward + * @proto: Protocol + * + * Return: pointer to the forwarded flowside information + */ +const struct flowside *flow_forward(const struct ctx *c, union flow *flow, + uint8_t proto) +{ + char estr[INANY_ADDRSTRLEN], fstr[INANY_ADDRSTRLEN]; + struct flow_common *f = &flow->f; + const struct flowside *ini = &f->side[INISIDE]; + struct flowside *fwd = &f->side[FWDSIDE]; + uint8_t pif1 = PIF_NONE;
This could now be 'pif_fwd' / 'pif_tgt', right?
+ + ASSERT(flow_new_entry == flow && f->state == FLOW_STATE_INI); + ASSERT(f->type == FLOW_TYPE_NONE); + ASSERT(f->pif[INISIDE] != PIF_NONE && f->pif[FWDSIDE] == PIF_NONE); + ASSERT(flow->f.state == FLOW_STATE_INI); + + switch (f->pif[INISIDE]) { + case PIF_TAP: + pif1 = fwd_from_tap(c, proto, ini, fwd); + break; + + case PIF_SPLICE: + pif1 = fwd_from_splice(c, proto, ini, fwd); + break; + + case PIF_HOST: + pif1 = fwd_from_host(c, proto, ini, fwd); + break; + + default: + flow_err(flow, "No rules to forward %s [%s]:%hu -> [%s]:%hu", + pif_name(f->pif[INISIDE]), + inany_ntop(&ini->eaddr, estr, sizeof(estr)), + ini->eport, + inany_ntop(&ini->faddr, fstr, sizeof(fstr)), + ini->fport); + } + + if (pif1 == PIF_NONE) + return NULL; + + f->pif[FWDSIDE] = pif1; + flow_set_state(f, FLOW_STATE_FWD); + return fwd; +} + /** * flow_set_type() - Set type and move to TYPED state * @flow: Flow to change state diff --git a/flow_table.h b/flow_table.h index d17ffba..3ac0b8c 100644 --- a/flow_table.h +++ b/flow_table.h @@ -118,6 +118,8 @@ const struct flowside *flow_forward_af(union flow *flow, uint8_t pif, sa_family_t af, const void *saddr, in_port_t sport, const void *daddr, in_port_t dport); +const struct flowside *flow_forward(const struct ctx *c, union flow *flow, + uint8_t proto);
union flow *flow_set_type(union flow *flow, enum flow_type type); #define FLOW_SET_TYPE(flow_, t_, var_) (&flow_set_type((flow_), (t_))->var_) diff --git a/fwd.c b/fwd.c index b3d5a37..5fe2361 100644 --- a/fwd.c +++ b/fwd.c @@ -25,6 +25,7 @@ #include "fwd.h" #include "passt.h" #include "lineread.h" +#include "flow_table.h"
/* See enum in kernel's include/net/tcp_states.h */ #define UDP_LISTEN 0x07 @@ -154,3 +155,112 @@ void fwd_scan_ports_init(struct ctx *c) &c->tcp.fwd_out, &c->tcp.fwd_in); } } + +uint8_t fwd_from_tap(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b)
A function comment would be nice to have, albeit a bit redundant. Now 'a' and 'b' could also be called 'ini' and 'tgt' I guess?
+{ + (void)proto; + + b->eaddr = a->faddr; + b->eport = a->fport; + + if (!c->no_map_gw) { + struct in_addr *v4 = inany_v4(&b->eaddr); + + if (v4 && IN4_ARE_ADDR_EQUAL(v4, &c->ip4.gw)) + *v4 = in4addr_loopback; + if (IN6_ARE_ADDR_EQUAL(&b->eaddr, &c->ip6.gw)) + b->eaddr.a6 = in6addr_loopback;
I haven't tested this, but I'm a bit lost: I thought that in this case we would also set b->faddr here. Where does that happen?
+ } + + return PIF_HOST; +} + +uint8_t fwd_from_splice(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b) +{ + const struct in_addr *ae4 = inany_v4(&a->eaddr); + + if (!inany_is_loopback(&a->eaddr) || + (!inany_is_loopback(&a->faddr) && !inany_is_unspecified(&a->faddr))) { + char estr[INANY_ADDRSTRLEN], fstr[INANY_ADDRSTRLEN]; + + debug("Non loopback address on %s: [%s]:%hu -> [%s]:%hu", + pif_name(PIF_SPLICE), + inany_ntop(&a->eaddr, estr, sizeof(estr)), a->eport, + inany_ntop(&a->faddr, fstr, sizeof(fstr)), a->fport); + return PIF_NONE; + } + + if (ae4) + inany_from_af(&b->eaddr, AF_INET, &in4addr_loopback); + else + inany_from_af(&b->eaddr, AF_INET6, &in6addr_loopback); + + b->eport = a->fport; + + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_out.delta[b->eport]; + + return PIF_HOST; +} + +uint8_t fwd_from_host(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b) +{ + struct in_addr *bf4; + + if (c->mode == MODE_PASTA && inany_is_loopback(&a->eaddr) && + proto == IPPROTO_TCP) { + /* spliceable */
Before we conclude this, does f->pif[INISIDE] == PIF_HOST in the caller guarantee that inany_is_loopback(&a->faddr), too? If not, we shouldn't splice unless that's true as well.
+ b->faddr = a->eaddr; + + if (inany_v4(&a->eaddr)) + inany_from_af(&b->eaddr, AF_INET, &in4addr_loopback); + else + inany_from_af(&b->eaddr, AF_INET6, &in6addr_loopback); + b->eport = a->fport; + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_in.delta[b->eport]; + + return PIF_SPLICE; + } + + b->faddr = a->eaddr; + b->fport = a->eport; + + bf4 = inany_v4(&b->faddr); + + if (bf4) { + if (IN4_IS_ADDR_LOOPBACK(bf4) || + IN4_IS_ADDR_UNSPECIFIED(bf4) || + IN4_ARE_ADDR_EQUAL(bf4, &c->ip4.addr_seen)) + *bf4 = c->ip4.gw; + } else { + struct in6_addr *bf6 = &b->faddr.a6; + + if (IN6_IS_ADDR_LOOPBACK(bf6) || + IN6_ARE_ADDR_EQUAL(bf6, &c->ip6.addr_seen) || + IN6_ARE_ADDR_EQUAL(bf6, &c->ip6.addr)) { + if (IN6_IS_ADDR_LINKLOCAL(&c->ip6.gw)) + *bf6 = c->ip6.gw; + else + *bf6 = c->ip6.addr_ll; + } + } + + if (bf4) { + inany_from_af(&b->eaddr, AF_INET, &c->ip4.addr_seen); + } else { + if (IN6_IS_ADDR_LINKLOCAL(&b->faddr.a6)) + b->eaddr.a6 = c->ip6.addr_ll_seen; + else + b->eaddr.a6 = c->ip6.addr_seen; + } + + b->eport = a->fport; + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_in.delta[b->eport];
As we do this in any case, spliced or not spliced, I would find it less confusing to have these assignments in common, earlier (I just spent half an hour trying to figure out why you wouldn't set b->eport for the non-spliced case...).
+ + return PIF_TAP; +} diff --git a/fwd.h b/fwd.h index 41645d7..eefe0f0 100644 --- a/fwd.h +++ b/fwd.h @@ -7,6 +7,8 @@ #ifndef FWD_H #define FWD_H
+struct flowside; + /* Number of ports for both TCP and UDP */ #define NUM_PORTS (1U << 16)
@@ -42,4 +44,14 @@ void fwd_scan_ports_udp(struct fwd_ports *fwd, const struct fwd_ports *rev, const struct fwd_ports *tcp_rev); void fwd_scan_ports_init(struct ctx *c);
+uint8_t fwd_from_tap(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b); +uint8_t fwd_from_splice(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b); +uint8_t fwd_from_host(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b); + +bool fwd_nat_flow(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b); + #endif /* FWD_H */ diff --git a/tcp.c b/tcp.c index 91b8a46..7e08b53 100644 --- a/tcp.c +++ b/tcp.c @@ -1759,7 +1759,6 @@ static void tcp_conn_from_tap(struct ctx *c, sa_family_t af, in_port_t dstport = ntohs(th->dest); const struct flowside *ini, *fwd; struct tcp_tap_conn *conn; - union inany_addr dstaddr; /* FIXME: Avoid bulky temporary */ union sockaddr_inany sa; union flow *flow; int s = -1, mss; @@ -1782,22 +1781,18 @@ static void tcp_conn_from_tap(struct ctx *c, sa_family_t af, goto cancel; }
- if ((s = tcp_conn_sock(c, af)) < 0) + if (!(fwd = flow_forward(c, flow, IPPROTO_TCP))) goto cancel;
- dstaddr = ini->faddr; - - if (!c->no_map_gw) { - struct in_addr *v4 = inany_v4(&dstaddr); - - if (v4 && IN4_ARE_ADDR_EQUAL(v4, &c->ip4.gw)) - *v4 = in4addr_loopback; - if (IN6_ARE_ADDR_EQUAL(&dstaddr, &c->ip6.gw)) - dstaddr.a6 = in6addr_loopback; + if (flow->f.pif[FWDSIDE] != PIF_HOST) { + flow_err(flow, "No support for forwarding TCP from %s to %s", + pif_name(flow->f.pif[INISIDE]), + pif_name(flow->f.pif[FWDSIDE])); + goto cancel; }
- fwd = flow_forward_af(flow, PIF_HOST, AF_INET6, - &inany_any6, srcport, &dstaddr, dstport); + if ((s = tcp_conn_sock(c, af)) < 0) + goto cancel;
if (IN6_IS_ADDR_LINKLOCAL(&fwd->eaddr)) { struct sockaddr_in6 addr6_ll = { @@ -2479,70 +2474,21 @@ static void tcp_connect_finish(struct ctx *c, struct tcp_tap_conn *conn) conn_flag(c, conn, ACK_FROM_TAP_DUE); }
-/** - * tcp_snat_inbound() - Translate source address for inbound data if needed - * @c: Execution context - * @addr: Source address of inbound packet/connection - */ -static void tcp_snat_inbound(const struct ctx *c, union inany_addr *addr) -{ - struct in_addr *addr4 = inany_v4(addr); - - if (addr4) { - if (IN4_IS_ADDR_LOOPBACK(addr4) || - IN4_IS_ADDR_UNSPECIFIED(addr4) || - IN4_ARE_ADDR_EQUAL(addr4, &c->ip4.addr_seen)) - *addr4 = c->ip4.gw; - } else { - struct in6_addr *addr6 = &addr->a6; - - if (IN6_IS_ADDR_LOOPBACK(addr6) || - IN6_ARE_ADDR_EQUAL(addr6, &c->ip6.addr_seen) || - IN6_ARE_ADDR_EQUAL(addr6, &c->ip6.addr)) { - if (IN6_IS_ADDR_LINKLOCAL(&c->ip6.gw)) - *addr6 = c->ip6.gw; - else - *addr6 = c->ip6.addr_ll; - } - } -} - /** * tcp_tap_conn_from_sock() - Initialize state for non-spliced connection * @c: Execution context - * @dstport: Destination port for connection (host side) * @flow: flow to initialise * @s: Accepted socket * @sa: Peer socket address (from accept()) * @now: Current timestamp */ -static void tcp_tap_conn_from_sock(struct ctx *c, in_port_t dstport, - union flow *flow, int s, - const union sockaddr_inany *sa, +static void tcp_tap_conn_from_sock(struct ctx *c, union flow *flow, int s, const struct timespec *now) { - union inany_addr saddr, daddr; /* FIXME: avoid bulky temporaries */ - struct tcp_tap_conn *conn; - in_port_t srcport; + struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp); uint64_t hash;
- inany_from_sockaddr(&saddr, &srcport, sa); - tcp_snat_inbound(c, &saddr); - - if (inany_v4(&saddr)) { - inany_from_af(&daddr, AF_INET, &c->ip4.addr_seen); - } else { - if (IN6_IS_ADDR_LINKLOCAL(&saddr)) - daddr.a6 = c->ip6.addr_ll_seen; - else - daddr.a6 = c->ip6.addr_seen; - } - dstport += c->tcp.fwd_in.delta[dstport]; - - flow_forward_af(flow, PIF_TAP, AF_INET6, - &saddr, srcport, &daddr, dstport); - conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp); - +
Excess newline and tab.
conn->sock = s; conn->timer = -1; conn->ws_to_tap = conn->ws_from_tap = 0; @@ -2585,8 +2531,7 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref, if (s < 0) goto cancel;
- flow_initiate_sa(flow, ref.tcp_listen.pif, &sa, ref.tcp_listen.port); - ini = &flow->f.side[INISIDE]; + ini = flow_initiate_sa(flow, ref.tcp_listen.pif, &sa, ref.tcp_listen.port);
if (!inany_is_unicast(&ini->eaddr) || ini->eport == 0) { char str[INANY_ADDRSTRLEN]; @@ -2596,11 +2541,26 @@ void tcp_listen_handler(struct ctx *c, union epoll_ref ref, goto cancel; }
- if (tcp_splice_conn_from_sock(c, ref.tcp_listen.pif, - ref.tcp_listen.port, flow, s, &sa)) - return; + if (!flow_forward(c, flow, IPPROTO_TCP)) + goto cancel; + + switch (flow->f.pif[FWDSIDE]) { + case PIF_SPLICE: + case PIF_HOST: + tcp_splice_conn_from_sock(c, flow, s); + break; + + case PIF_TAP: + tcp_tap_conn_from_sock(c, flow, s, now); + break; + + default: + flow_err(flow, "No support for forwarding TCP from %s to %s", + pif_name(flow->f.pif[INISIDE]), + pif_name(flow->f.pif[FWDSIDE])); + goto cancel; + }
- tcp_tap_conn_from_sock(c, ref.tcp_listen.port, flow, s, &sa, now); return;
cancel: diff --git a/tcp_splice.c b/tcp_splice.c index aa92325..a0581f0 100644 --- a/tcp_splice.c +++ b/tcp_splice.c @@ -395,71 +395,18 @@ static int tcp_conn_sock_ns(const struct ctx *c, sa_family_t af) /** * tcp_splice_conn_from_sock() - Attempt to init state for a spliced connection * @c: Execution context - * @pif0: pif id of side 0 - * @dstport: Side 0 destination port of connection * @flow: flow to initialise * @s0: Accepted (side 0) socket * @sa: Peer address of connection * - * Return: true if able to create a spliced connection, false otherwise
Not related to this patch, but I think we should probably describe in the theory of operation for flows what's the threshold between calling flow_alloc_cancel() on a flow (which would imply returning something here, in case tcp_splice_connect() fails), and deferring that instead to a CLOSING state.
* #syscalls:pasta setsockopt */ -bool tcp_splice_conn_from_sock(const struct ctx *c, - uint8_t pif0, in_port_t dstport, - union flow *flow, int s0, - const union sockaddr_inany *sa) +void tcp_splice_conn_from_sock(const struct ctx *c, union flow *flow, int s0) { - struct tcp_splice_conn *conn; - union inany_addr src; - in_port_t srcport; - sa_family_t af; - uint8_t pif1; + struct tcp_splice_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP_SPLICE, + tcp_splice);
- if (c->mode != MODE_PASTA) - return false; - - inany_from_sockaddr(&src, &srcport, sa); - af = inany_v4(&src) ? AF_INET : AF_INET6; - - switch (pif0) { - case PIF_SPLICE: - if (!inany_is_loopback(&src)) { - char str[INANY_ADDRSTRLEN]; - - /* We can't use flow_err() etc. because we haven't set - * the flow type yet - */ - warn("Bad source address %s for splice, closing", - inany_ntop(&src, str, sizeof(str))); - - /* We *don't* want to fall back to tap */ - flow_alloc_cancel(flow); - return true; - } - - pif1 = PIF_HOST; - dstport += c->tcp.fwd_out.delta[dstport]; - break; - - case PIF_HOST: - if (!inany_is_loopback(&src)) - return false; - - pif1 = PIF_SPLICE; - dstport += c->tcp.fwd_in.delta[dstport]; - break; - - default: - return false; - } - - if (af == AF_INET) - flow_forward_af(flow, pif1, AF_INET, - NULL, 0, &in4addr_loopback, dstport); - else - flow_forward_af(flow, pif1, AF_INET6, - NULL, 0, &in6addr_loopback, dstport); - conn = FLOW_SET_TYPE(flow, FLOW_TCP_SPLICE, tcp_splice); + ASSERT(c->mode == MODE_PASTA);
conn->s[0] = s0; conn->s[1] = -1; @@ -473,8 +420,6 @@ bool tcp_splice_conn_from_sock(const struct ctx *c, conn_flag(c, conn, CLOSING);
FLOW_ACTIVATE(conn); - - return true; }
/** diff --git a/tcp_splice.h b/tcp_splice.h index ed8f0c5..a20f3e2 100644 --- a/tcp_splice.h +++ b/tcp_splice.h @@ -11,10 +11,7 @@ union sockaddr_inany;
void tcp_splice_sock_handler(struct ctx *c, union epoll_ref ref, uint32_t events); -bool tcp_splice_conn_from_sock(const struct ctx *c, - uint8_t pif0, in_port_t dstport, - union flow *flow, int s0, - const union sockaddr_inany *sa); +void tcp_splice_conn_from_sock(const struct ctx *c, union flow *flow, int s0); void tcp_splice_init(struct ctx *c);
#endif /* TCP_SPLICE_H */
-- Stefano
On Sat, May 18, 2024 at 12:13:45AM +0200, Stefano Brivio wrote:
On Tue, 14 May 2024 11:03:36 +1000 David Gibson wrote:
Currently the code to translate host side addresses and ports to guest side addresses and ports, and vice versa, is scattered across the TCP code. This includes both port redirection as controlled by the -t and -T options, and our special case NAT controlled by the --no-map-gw option.
Gather this logic into fwd_from_*() functions for each input interface in fwd.c which take protocol and address information for the initiating side and generates the pif and address information for the forwarded side. This performs any NAT or port forwarding needed.
We create a flow_forward() helper which applies those forwarding functions as needed to automatically move a flow from INI to FWD state. For now we leave the older flow_forward_af() function taking explicit addresses as a transitional tool.
Signed-off-by: David Gibson
--- flow.c | 53 +++++++++++++++++++++++++ flow_table.h | 2 + fwd.c | 110 +++++++++++++++++++++++++++++++++++++++++++++++++++ fwd.h | 12 ++++++ tcp.c | 102 +++++++++++++++-------------------------------- tcp_splice.c | 63 ++--------------------------- tcp_splice.h | 5 +-- 7 files changed, 213 insertions(+), 134 deletions(-) diff --git a/flow.c b/flow.c index 4942075..a6afe39 100644 --- a/flow.c +++ b/flow.c @@ -304,6 +304,59 @@ const struct flowside *flow_forward_af(union flow *flow, uint8_t pif, return fwd; }
+ +/** + * flow_forward() - Determine where flow should forward to, and move to FWD + * @c: Execution context + * @flow: Flow to forward + * @proto: Protocol + * + * Return: pointer to the forwarded flowside information + */ +const struct flowside *flow_forward(const struct ctx *c, union flow *flow, + uint8_t proto) +{ + char estr[INANY_ADDRSTRLEN], fstr[INANY_ADDRSTRLEN]; + struct flow_common *f = &flow->f; + const struct flowside *ini = &f->side[INISIDE]; + struct flowside *fwd = &f->side[FWDSIDE]; + uint8_t pif1 = PIF_NONE;
This could now be 'pif_fwd' / 'pif_tgt', right?
Good idea, changed. [snip]
diff --git a/fwd.c b/fwd.c index b3d5a37..5fe2361 100644 --- a/fwd.c +++ b/fwd.c @@ -25,6 +25,7 @@ #include "fwd.h" #include "passt.h" #include "lineread.h" +#include "flow_table.h"
/* See enum in kernel's include/net/tcp_states.h */ #define UDP_LISTEN 0x07 @@ -154,3 +155,112 @@ void fwd_scan_ports_init(struct ctx *c) &c->tcp.fwd_out, &c->tcp.fwd_in); } } + +uint8_t fwd_from_tap(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b)
A function comment would be nice to have, albeit a bit redundant.
Ah, yes. I meant to go back and add these, but obviously forgot. Fixed now.
Now 'a' and 'b' could also be called 'ini' and 'tgt' I guess?
Also a good idea, done.
+{ + (void)proto; + + b->eaddr = a->faddr; + b->eport = a->fport; + + if (!c->no_map_gw) { + struct in_addr *v4 = inany_v4(&b->eaddr); + + if (v4 && IN4_ARE_ADDR_EQUAL(v4, &c->ip4.gw)) + *v4 = in4addr_loopback; + if (IN6_ARE_ADDR_EQUAL(&b->eaddr, &c->ip6.gw)) + b->eaddr.a6 = in6addr_loopback;
I haven't tested this, but I'm a bit lost: I thought that in this case we would also set b->faddr here. Where does that happen?
Ah.. right. So notionally we should set tgt->faddr here. However, because in this case we're forwarding to PIF_HOST we don't actually know tgt->faddr (or tgt->fport) without a getsockname() call, so we're leaving them blank. They will, in fact, be blank because we zero the entire entry in flow_alloc(). That's pretty non-obvious though, I'll change this to explicitly set faddr and fport with a comment.
+ } + + return PIF_HOST; +} + +uint8_t fwd_from_splice(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b) +{ + const struct in_addr *ae4 = inany_v4(&a->eaddr); + + if (!inany_is_loopback(&a->eaddr) || + (!inany_is_loopback(&a->faddr) && !inany_is_unspecified(&a->faddr))) { + char estr[INANY_ADDRSTRLEN], fstr[INANY_ADDRSTRLEN]; + + debug("Non loopback address on %s: [%s]:%hu -> [%s]:%hu", + pif_name(PIF_SPLICE), + inany_ntop(&a->eaddr, estr, sizeof(estr)), a->eport, + inany_ntop(&a->faddr, fstr, sizeof(fstr)), a->fport); + return PIF_NONE; + } + + if (ae4) + inany_from_af(&b->eaddr, AF_INET, &in4addr_loopback); + else + inany_from_af(&b->eaddr, AF_INET6, &in6addr_loopback); + + b->eport = a->fport; + + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_out.delta[b->eport]; + + return PIF_HOST; +} + +uint8_t fwd_from_host(const struct ctx *c, uint8_t proto, + const struct flowside *a, struct flowside *b) +{ + struct in_addr *bf4; + + if (c->mode == MODE_PASTA && inany_is_loopback(&a->eaddr) && + proto == IPPROTO_TCP) { + /* spliceable */
Before we conclude this, does f->pif[INISIDE] == PIF_HOST in the caller guarantee that inany_is_loopback(&a->faddr), too?
Only in the sense that if we accept()ed a connection from a loopback address on a socket not bound to a loopback address (or ANY), then the kernel has done something wrong. This kind of has the inverse of the issue above: we don't necessarily know the forwarding address here - we only know that with either a getsockname(), or by looking at the bound address of the listening socket (which might be unspecified).
If not, we shouldn't splice unless that's true as well.
So I'm pretty confident what we do here is equivalent to what we did before. That might not be correct, but fixing that is for a different patch. Making problems like that more obvious is one of the advantages I expect for gathering all this forwarding logic into one place.
+ b->faddr = a->eaddr; + + if (inany_v4(&a->eaddr)) + inany_from_af(&b->eaddr, AF_INET, &in4addr_loopback); + else + inany_from_af(&b->eaddr, AF_INET6, &in6addr_loopback); + b->eport = a->fport; + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_in.delta[b->eport]; + + return PIF_SPLICE; + } + + b->faddr = a->eaddr; + b->fport = a->eport; + + bf4 = inany_v4(&b->faddr); + + if (bf4) { + if (IN4_IS_ADDR_LOOPBACK(bf4) || + IN4_IS_ADDR_UNSPECIFIED(bf4) || + IN4_ARE_ADDR_EQUAL(bf4, &c->ip4.addr_seen)) + *bf4 = c->ip4.gw; + } else { + struct in6_addr *bf6 = &b->faddr.a6; + + if (IN6_IS_ADDR_LOOPBACK(bf6) || + IN6_ARE_ADDR_EQUAL(bf6, &c->ip6.addr_seen) || + IN6_ARE_ADDR_EQUAL(bf6, &c->ip6.addr)) { + if (IN6_IS_ADDR_LINKLOCAL(&c->ip6.gw)) + *bf6 = c->ip6.gw; + else + *bf6 = c->ip6.addr_ll; + } + } + + if (bf4) { + inany_from_af(&b->eaddr, AF_INET, &c->ip4.addr_seen); + } else { + if (IN6_IS_ADDR_LINKLOCAL(&b->faddr.a6)) + b->eaddr.a6 = c->ip6.addr_ll_seen; + else + b->eaddr.a6 = c->ip6.addr_seen; + } + + b->eport = a->fport; + if (proto == IPPROTO_TCP) + b->eport += c->tcp.fwd_in.delta[b->eport];
As we do this in any case, spliced or not spliced, I would find it less confusing to have these assignments in common, earlier (I just spent half an hour trying to figure out why you wouldn't set b->eport for the non-spliced case...).
Fair point. This was just because I thought my way through the two cases separately. I've made this stanza common. [snip]
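For reference, the stanza in question (exactly as it appears in the quoted hunks), now shared by the spliced and non-spliced paths rather than repeated in each branch:

	b->eport = a->fport;
	if (proto == IPPROTO_TCP)
		b->eport += c->tcp.fwd_in.delta[b->eport];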
+static void tcp_tap_conn_from_sock(struct ctx *c, union flow *flow, int s,
+				   const struct timespec *now)
 {
-	union inany_addr saddr, daddr; /* FIXME: avoid bulky temporaries */
-	struct tcp_tap_conn *conn;
-	in_port_t srcport;
+	struct tcp_tap_conn *conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp);
 	uint64_t hash;

-	inany_from_sockaddr(&saddr, &srcport, sa);
-	tcp_snat_inbound(c, &saddr);
-
-	if (inany_v4(&saddr)) {
-		inany_from_af(&daddr, AF_INET, &c->ip4.addr_seen);
-	} else {
-		if (IN6_IS_ADDR_LINKLOCAL(&saddr))
-			daddr.a6 = c->ip6.addr_ll_seen;
-		else
-			daddr.a6 = c->ip6.addr_seen;
-	}
-	dstport += c->tcp.fwd_in.delta[dstport];
-
-	flow_forward_af(flow, PIF_TAP, AF_INET6,
-			&saddr, srcport, &daddr, dstport);
-	conn = FLOW_SET_TYPE(flow, FLOW_TCP, tcp);
-
+
Excess newline and tab.
Looks like I already fixed that. [snip]
--- a/tcp_splice.c
+++ b/tcp_splice.c
@@ -395,71 +395,18 @@ static int tcp_conn_sock_ns(const struct ctx *c, sa_family_t af)
 /**
  * tcp_splice_conn_from_sock() - Attempt to init state for a spliced connection
  * @c:		Execution context
- * @pif0:	pif id of side 0
- * @dstport:	Side 0 destination port of connection
  * @flow:	flow to initialise
  * @s0:		Accepted (side 0) socket
  * @sa:		Peer address of connection
  *
- * Return: true if able to create a spliced connection, false otherwise
Not related to this patch, but I think we should probably describe in the theory of operation for flows what's the threshold between calling flow_alloc_cancel() on a flow (which would imply returning something here, in case tcp_splice_connect() fails), and deferring that instead to a CLOSING state.
That's included in the new descriptions of the flow states. There might be a way to make it more obvious, but I'm not immediately sure of it. In any case the answer is: you can't cancel once you FLOW_ACTIVATE().

[snip]

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
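A purely illustrative fragment of that rule; the call shapes here are hypothetical, only the ordering is the point:

	/* Rollback with flow_alloc_cancel() is only legal while the flow has
	 * not yet been activated... */
	if (tcp_splice_connect(c, conn) < 0) {
		flow_alloc_cancel(flow);	/* fine: FLOW_ACTIVATE() not reached */
		return false;
	}

	FLOW_ACTIVATE(conn);
	/* ...from here on, a failure must take the flow through its closing
	 * state(s) instead of cancelling it. */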
Currently, ICMP hard-codes its forwarding rules and never applies any
translations. Change it to use the flow_forward() function, so that
it's translated the same way as TCP (excluding TCP-specific port
redirection).

This means that gw mapping now applies to ICMP, so "ping <gw address>" will
now ping the host's loopback instead of the actual gw machine. This
removes the surprising behaviour that the target you ping might not be the
same as the one you connect to with TCP.
Signed-off-by: David Gibson
On Tue, 14 May 2024 11:03:37 +1000
David Gibson wrote:
Currently, ICMP hard-codes its forwarding rules and never applies any translations. Change it to use the flow_forward() function, so that it's translated the same way as TCP (excluding TCP-specific port redirection).

This means that gw mapping now applies to ICMP, so "ping <gw address>" will now ping the host's loopback instead of the actual gw machine. This removes the surprising behaviour that the target you ping might not be the same as the one you connect to with TCP.
Signed-off-by: David Gibson
---
 flow.c |  1 +
 icmp.c | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/flow.c b/flow.c
index a6afe39..b43a079 100644
--- a/flow.c
+++ b/flow.c
@@ -285,6 +285,7 @@ const struct flowside *flow_initiate_sa(union flow *flow, uint8_t pif,
  *
  * Return: pointer to the forwarded flowside information
  */
+/* cppcheck-suppress unusedFunction */
 const struct flowside *flow_forward_af(union flow *flow, uint8_t pif,
 				       sa_family_t af,
 				       const void *saddr, in_port_t sport,
diff --git a/icmp.c b/icmp.c
index 0112fd9..6310178 100644
--- a/icmp.c
+++ b/icmp.c
@@ -153,6 +153,7 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c,
 					    sa_family_t af, uint16_t id,
 					    const void *saddr, const void *daddr)
 {
+	uint8_t proto = af == AF_INET ? IPPROTO_ICMP : IPPROTO_ICMPV6;
 	uint8_t flowtype = af == AF_INET ? FLOW_PING4 : FLOW_PING6;
 	union epoll_ref ref = { .type = EPOLL_TYPE_PING };
 	union flow *flow = flow_alloc();
@@ -163,9 +164,18 @@ static struct icmp_ping_flow *icmp_ping_new(const struct ctx *c,
 	if (!flow)
 		return NULL;
-	flow_initiate_af(flow, PIF_TAP, af, saddr, id, daddr, id);
-	flow_forward_af(flow, PIF_HOST, af, NULL, 0, daddr, 0);
+	if (!flow_forward(c, flow, proto))
+		goto cancel;
+
+	if (flow->f.pif[FWDSIDE] != PIF_HOST) {
+		flow_err(flow, "No support for forwarding %s from %s to %s",
+			 proto == IPPROTO_ICMP ? "ICMP" : "ICMPv6",
Which brings me to two remarks:

- having the protocol name also in the flow_err() message printed in
  flow_forward() could be helpful

- then, perhaps, we should re-introduce ip_proto_str[] which was dropped
  with 340164445341 ("epoll: Generalize epoll_ref to cover things other
  than sockets")
+			 pif_name(flow->f.pif[INISIDE]),
+			 pif_name(flow->f.pif[FWDSIDE]));
+		goto cancel;
+	}
+
 	pingf = FLOW_SET_TYPE(flow, flowtype, ping);

 	pingf->seq = -1;
-- Stefano
On Sat, May 18, 2024 at 12:14:08AM +0200, Stefano Brivio wrote:
On Tue, 14 May 2024 11:03:37 +1000 David Gibson wrote:

Currently, ICMP hard-codes its forwarding rules and never applies any translations. Change it to use the flow_forward() function, so that it's translated the same way as TCP (excluding TCP-specific port redirection).

This means that gw mapping now applies to ICMP, so "ping <gw address>" will now ping the host's loopback instead of the actual gw machine. This removes the surprising behaviour that the target you ping might not be the same as the one you connect to with TCP.
Signed-off-by: David Gibson
[snip]
-	flow_initiate_af(flow, PIF_TAP, af, saddr, id, daddr, id);
-	flow_forward_af(flow, PIF_HOST, af, NULL, 0, daddr, 0);
+	if (!flow_forward(c, flow, proto))
+		goto cancel;
+
+	if (flow->f.pif[FWDSIDE] != PIF_HOST) {
+		flow_err(flow, "No support for forwarding %s from %s to %s",
+			 proto == IPPROTO_ICMP ? "ICMP" : "ICMPv6",
Which brings me to two remarks:
- having the protocol name also in the flow_err() message printed in flow_forward() could be helpful
It would, and I've thought about it, but haven't seen a great way to go about it.

* The flow type is not set at this point, so we can't use that. We can't
  trivially move setting the type earlier, because for TCP at least we need
  the information from flow_forward() to determine whether we're spliced, and
  set the type based on that.

* Including both flow type and protocol in flow_common is annoyingly
  redundant, as well as adding a full 32 bits to the structure because of
  padding.

* We could possibly eliminate the flow type and make it implicit based on the
  protocol + pifs: a regular TCP flow is (TCP, HOST, TAP) or (TCP, TAP, HOST),
  whereas a spliced TCP flow is (TCP, HOST, SPLICE) or (TCP, SPLICE, HOST).
  But the type is the selector field for which of the union variants is
  valid, and I don't love that being something with a kind of complicated
  calculation behind it.

* We could make the type field hold the protocol until FLOW_SET_TYPE(), but I
  don't love the semantics of a field changing like that.

* We could just pass either a protocol number or a string to flow_forward()
  etc., but that seems a bit awkward (see the sketch below).

Hrm... actually, thinking on that last one: it might make sense to add a descriptive string to flow_initiate(), not just the protocol but something like "TCP SYN" versus "TCP accept()" or the like. That wouldn't directly help flow_forward(), but the info line from flow_initiate() is likely to be in close proximity, so it would help.
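For the protocol-number option, a self-contained sketch of what a reintroduced lookup might look like; the helper name and shape are hypothetical, not taken from the series:

#include <netinet/in.h>
#include <stdint.h>

/* Hypothetical replacement for the dropped ip_proto_str[]: map the IP
 * protocol number a flow was initiated with to a printable name, so that
 * messages printed from flow_forward() could name the protocol before the
 * flow type is set. */
static const char *flow_proto_str(uint8_t proto)
{
	switch (proto) {
	case IPPROTO_TCP:	return "TCP";
	case IPPROTO_UDP:	return "UDP";
	case IPPROTO_ICMP:	return "ICMP";
	case IPPROTO_ICMPV6:	return "ICMPv6";
	default:		return "unknown";
	}
}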
- then, perhaps, we should re-introduce ip_proto_str[] which was dropped with 340164445341 ("epoll: Generalize epoll_ref to cover things other than sockets")
Maybe. I guess the standard way to do that is with getprotobyname(3), but that probably won't work in our isolated namespace.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson