[PATCH v3 0/3] Decouple iovec management from virtqueue elements
This series prepares the vhost-user path for multi-buffer support, where
a single virtqueue element can use more than one iovec entry.

Currently, iovec arrays are tightly coupled to virtqueue elements:
callers must pre-initialize each element's in_sg/out_sg pointers before
calling vu_queue_pop(), and each element is assumed to own exactly one
iovec slot. This makes it impossible for a single element to span
multiple iovec entries, which is needed for UDP multi-buffer reception.

The series decouples iovec storage from elements in three patches:

- Patch 1 passes iovec arrays as separate parameters to vu_queue_pop()
  and vu_queue_map_desc(), so the caller controls where descriptors are
  mapped rather than reading them from pre-initialized element fields.

- Patch 2 passes the actual remaining out_sg capacity to vu_queue_pop()
  in vu_handle_tx() instead of a fixed per-element constant, enabling
  dynamic iovec allocation.

- Patch 3 moves iovec pool management into vu_collect(), which now
  accepts the iovec array and tracks consumed entries across elements
  with a running counter. This removes vu_set_element() and
  vu_init_elem() entirely. Callers that still assume one iovec per
  element assert this invariant explicitly until they are updated for
  multi-buffer.

The follow-up udp-iov_vu series builds on this to implement actual
multi-buffer support in the UDP vhost-user path.

v3:
- rebase and add David's R-b
- fix coding style (if)
- rename in_num to in_total

v2:
- in patch 3, use iov_used in iov_truncate() rather than elem_cnt as
  vu_collect() is now providing the number of iovec collected.
Laurent Vivier (3):
  virtio: Pass iovec arrays as separate parameters to vu_queue_pop()
  vu_handle_tx: Pass actual remaining out_sg capacity to vu_queue_pop()
  vu_common: Move iovec management into vu_collect()

 tcp_vu.c    | 25 +++++++++-------
 udp_vu.c    | 21 ++++++++------
 virtio.c    | 29 ++++++++++++++-----
 virtio.h    |  4 ++-
 vu_common.c | 82 ++++++++++++++++++++++++-----------------------------
 vu_common.h | 22 ++------------
 6 files changed, 91 insertions(+), 92 deletions(-)

-- 
2.53.0
[PATCH v3 3/3] vu_common: Move iovec management into vu_collect()

Previously, callers had to pre-initialize virtqueue elements with iovec
entries using vu_set_element() or vu_init_elem() before calling
vu_collect(). This meant each element owned a fixed, pre-assigned iovec
slot.
Move the iovec array into vu_collect() as explicit parameters (in_sg,
max_in_sg, and in_total), letting it pass the remaining iovec capacity
directly to vu_queue_pop(). A running current_iov counter tracks
consumed entries across elements, so multiple elements share a single
iovec pool. The optional in_total output parameter reports how many iovec
entries were consumed, allowing callers to track usage across multiple
vu_collect() calls.
This removes vu_set_element() and vu_init_elem() which are no longer
needed, and is a prerequisite for multi-buffer support where a single
virtqueue element can use more than one iovec entry. For now, callers
assert the current single-iovec-per-element invariant until they are
updated to handle multiple iovecs.
Signed-off-by: Laurent Vivier
[PATCH v3 2/3] vu_handle_tx: Pass actual remaining out_sg capacity to vu_queue_pop()

In vu_handle_tx(), pass the actual remaining iovec capacity
(ARRAY_SIZE(out_sg) - out_sg_count) to vu_queue_pop() rather than a
fixed VU_MAX_TX_BUFFER_NB.
This enables dynamic allocation of iovec entries to each element rather
than reserving a fixed number of slots per descriptor.
Signed-off-by: Laurent Vivier
[PATCH v3 1/3] virtio: Pass iovec arrays as separate parameters to vu_queue_pop()

Currently vu_queue_pop() and vu_queue_map_desc() read the iovec arrays
(in_sg/out_sg) and their sizes (in_num/out_num) from the vu_virtq_element
struct. This couples the iovec storage to the element, requiring callers
like vu_handle_tx() to pre-initialize the element fields before calling
vu_queue_pop().
Pass the iovec arrays and their maximum sizes as separate parameters
instead. vu_queue_map_desc() now writes the actual descriptor count
and iovec pointers back into the element after mapping, rather than
using the element as both input and output.
This decouples the iovec storage from the element, which is a
prerequisite for multi-buffer support where a single frame can span
multiple virtqueue elements sharing a common iovec pool.
No functional change.
Signed-off-by: Laurent Vivier
On Wed, 18 Mar 2026 10:19:38 +0100
Laurent Vivier wrote:
[...]
Applied. I took the liberty to add David's Reviewed-by: back on 3/3 as
the only change in 3/3 v3 compared to v2 was actually something he
suggested.

-- 
Stefano
On Fri, Mar 20, 2026 at 09:58:17PM +0100, Stefano Brivio wrote:
> On Wed, 18 Mar 2026 10:19:38 +0100 Laurent Vivier wrote:
>
> [...]
>
> Applied. I took the liberty to add David's Reviewed-by: back on 3/3
> as the only change in 3/3 v3 compared to v2 was actually something he
> suggested.
Oh, sorry, I thought I'd already R-b'ed all of v3.

-- 
David Gibson (he or they)      | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you, not the other way
                               | around.
http://www.ozlabs.org/~dgibson