On Tue, 14 Feb 2023 12:59:02 +1100
David Gibson <david(a)gibson.dropbear.id.au> wrote:

> Hi,
>
> I've been running the tests for the first time since getting back from
> holiday. I seem to be getting a consistent failure on the first batch
> of IPv4 throughput tests, although the IPv6 tests before it worked
> fine:
>
> === perf/passt_tcp

Whoa. I haven't, not even occasionally.

> passt: throughput and latency
> Throughput in Gbps, latency in µs, one thread at 4.8 GHz, 8 streams
>
> MTU:                                      |  256B  |  576B  | 1280B  | 1500B  | 9000B  | 65520B |
>                                           |--------|--------|--------|--------|--------|--------|
> TCP throughput over IPv6: guest to host   |    -   |    -   |    6.5 |    7.6 |   20.7 |   28.1 |
> TCP RR latency over IPv6: guest to host   |    -   |    -   |    -   |    -   |    -   |     74 |
> TCP CRR latency over IPv6: guest to host  |    -   |    -   |    -   |    -   |    -   |    227 |
>                                           |--------|--------|--------|--------|--------|--------|
> TCP throughput over IPv4: guest to host   |      0 |      0 |      0 |      0 |      0 |      0 |
> TCP RR latency over IPv4: guest to host   |    -   |    -   |    -   |    -   |    -   |
>
> I haven't seen a failure like this before, has anyone else seen
> something like this?
>
> I'm not getting any messages from the iperf client at all - like it
> has no connectivity whatsoever, but the non-perf tests are all
> passing.

The only obvious difference I see is the port numbers: that would be
port 10003 for the functionality tests, and 10002 for the throughput
tests. Maybe you have something else already bound to it...?

Or could it be due to the "new" virtio-net TX hang, fixed by kernel
commit d71ebe8114b4 ("virtio-net: correctly enable callback during
start_xmit")? The 256-byte MTU test is the one most likely to trigger
that condition...

-- 
Stefano
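
For what it's worth, if a stray listener on the throughput port is the
suspect, a throwaway check like the one below can confirm it before
re-running the suite. This is only an illustrative sketch, not part of
the passt test harness: the port number 10002 comes from the message
above, everything else (the wildcard address, the script itself) is an
assumption.

    #!/usr/bin/env python3
    # Try to bind the throughput test port; failure suggests something
    # else already holds it. Port 10002 is taken from the thread above.
    import socket

    PORT = 10002

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR avoids false positives from sockets in TIME_WAIT;
        # it does not let us bind over an active listener.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("0.0.0.0", PORT))
        except OSError as err:
            print(f"port {PORT} looks busy: {err}")
        else:
            print(f"port {PORT} is free to bind")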