On 2026-04-16 12:16, Laurent Vivier wrote:
Previously, tcp_vu_sock_recv() assumed a 1:1 mapping between virtqueue elements and iovecs (one iovec per element), enforced by an ASSERT. This prevented the use of virtqueue elements with multiple buffers (e.g. when mergeable rx buffers are not negotiated and headers are provided in a separate buffer).
[...]
-		if (dlen > len)
-			dlen = len;
-		len -= dlen;
+		dlen = frame[i].size - hdrlen;
 		/* The IPv4 header checksum varies only with dlen */
 		if (previous_dlen != dlen)
 			check |= IP4_CSUM;
 		previous_dlen = dlen;
-		tcp_vu_prepare(c, conn, iov, buf_cnt, dlen, &check, push);
+		tcp_vu_prepare(c, conn, iov, iov_cnt, dlen, &check, push);
-		vu_pad(elem[head[i]].in_sg, buf_cnt, dlen + hdrlen);
-		vu_flush(vdev, vq, &elem[head[i]], buf_cnt, dlen + hdrlen);
+		vu_pad(&iov[frame[i].idx_iovec], frame[i].num_iovec,
+		       dlen + hdrlen);
This doesn't look right: iov is already &iov_vu[frame[i].idx_iovec], set a few lines further up in this patch. I suspect your intention is to access it the way you do in tcp_vu_prepare():

	vu_pad(iov, iov_cnt, dlen + hdrlen);

/jon
 		if (*c->pcap) {
-			pcap_iov(iov, buf_cnt, VNET_HLEN,
+			pcap_iov(iov, iov_cnt, VNET_HLEN,
 				 dlen + hdrlen - VNET_HLEN);
 		}
+		vu_flush(vdev, vq, &elem[frame[i].idx_element],
+			 frame[i].num_element, dlen + hdrlen);
 		conn->seq_to_tap += dlen;
 	}