This series of patches adds vhost-user support to passt and then allows
passt to connect to the QEMU network backend using virtqueues rather than
a socket.

With QEMU, rather than connecting with:

  -netdev stream,id=s,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket

we will use:

  -chardev socket,id=chr0,path=/tmp/passt_1.socket
  -netdev vhost-user,id=netdev0,chardev=chr0
  -device virtio-net,netdev=netdev0
  -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE
  -numa node,memdev=memfd0

The memory backend is needed to share data between passt and QEMU.

Performance comparison between "-netdev stream" and "-netdev vhost-user":

$ iperf3 -c localhost -p 10001 -t 60 -6 -u -b 50G

socket:
[  5]   0.00-60.05 sec  95.6 GBytes  13.7 Gbits/sec  0.017 ms  6998988/10132413 (69%)   receiver
vhost-user:
[  5]   0.00-60.04 sec   237 GBytes  33.9 Gbits/sec  0.006 ms  53673/7813770 (0.69%)    receiver

$ iperf3 -c localhost -p 10001 -t 60 -4 -u -b 50G

socket:
[  5]   0.00-60.05 sec  98.9 GBytes  14.1 Gbits/sec  0.018 ms  6260735/9501832 (66%)    receiver
vhost-user:
[  5]   0.00-60.05 sec   235 GBytes  33.7 Gbits/sec  0.008 ms  37581/7752699 (0.48%)    receiver

$ iperf3 -c localhost -p 10001 -t 60 -6

socket:
[  5]   0.00-60.00 sec  17.3 GBytes  2.48 Gbits/sec    0             sender
[  5]   0.00-60.06 sec  17.3 GBytes  2.48 Gbits/sec                  receiver
vhost-user:
[  5]   0.00-60.00 sec   191 GBytes  27.4 Gbits/sec    0             sender
[  5]   0.00-60.05 sec   191 GBytes  27.3 Gbits/sec                  receiver

$ iperf3 -c localhost -p 10001 -t 60 -4

socket:
[  5]   0.00-60.00 sec  15.6 GBytes  2.24 Gbits/sec    0             sender
[  5]   0.00-60.06 sec  15.6 GBytes  2.24 Gbits/sec                  receiver
vhost-user:
[  5]   0.00-60.00 sec   189 GBytes  27.1 Gbits/sec    0             sender
[  5]   0.00-60.04 sec   189 GBytes  27.0 Gbits/sec                  receiver

v5:
  - rebase on top of 2024_09_06.6b38f07
  - rework udp_vu.c: as ref.udp.v6 has been removed, we need to know
    whether we receive an IPv4 or an IPv6 frame when we prepare the
    guest buffers for recvmsg()
  - remove vnet->hdrlen as the size is always the same with virtio-net v1
  - address comments from David and Stefano

v4:
  - rebase on top of 2024_08_21.1d6142f
    (rebasing on top of 620e19a1b48a ("udp: Merge udp[46]_mh_recv
    arrays") introduces a regression in the latency measured with UDP,
    because I don't think I replace correctly ref.udp.v6, which is
    removed by that commit)
  - addressed most of the comments from David and Stefano (I didn't
    want to postpone this version to next week, so I'll address the
    remaining comments in the next version)
v3:
  - rebase on top of flow table
  - update tcp_vu.c to look like udp_vu.c (recv()/prepare()/send_frame())
  - address comments from Stefano and David on version 2

v2:
  - remove PATCH 4
  - rewrite PATCH 2 and 3 to follow passt coding style
  - move some code from PATCH 3 to PATCH 4 (previously PATCH 5)
  - partially addressed David's comment on PATCH 5

Laurent Vivier (4):
  packet: replace struct desc by struct iovec
  vhost-user: introduce virtio API
  vhost-user: introduce vhost-user API
  vhost-user: add vhost-user

 Makefile       |    6 +-
 checksum.c     |    1 -
 conf.c         |   23 +-
 epoll_type.h   |    4 +
 iov.c          |    1 -
 isolation.c    |   17 +-
 packet.c       |   91 ++--
 packet.h       |   22 +-
 passt.1        |   10 +-
 passt.c        |   26 +-
 passt.h        |    6 +
 pcap.c         |    1 -
 tap.c          |  111 ++++-
 tap.h          |    5 +-
 tcp.c          |   31 +-
 tcp_buf.c      |    8 +-
 tcp_internal.h |    3 +-
 tcp_vu.c       |  647 +++++++++++++++++++++++++
 tcp_vu.h       |   12 +
 udp.c          |   78 +--
 udp.h          |    8 +-
 udp_internal.h |   34 ++
 udp_vu.c       |  397 +++++++++++++++
 udp_vu.h       |   13 +
 util.h         |    8 +
 vhost_user.c   | 1247 ++++++++++++++++++++++++++++++++++++++++++++++++
 vhost_user.h   |  209 ++++++++
 virtio.c       |  659 +++++++++++++++++++++++++
 virtio.h       |  184 +++++++
 vu_common.c    |   36 ++
 vu_common.h    |   34 ++
 31 files changed, 3792 insertions(+), 140 deletions(-)
 create mode 100644 tcp_vu.c
 create mode 100644 tcp_vu.h
 create mode 100644 udp_internal.h
 create mode 100644 udp_vu.c
 create mode 100644 udp_vu.h
 create mode 100644 vhost_user.c
 create mode 100644 vhost_user.h
 create mode 100644 virtio.c
 create mode 100644 virtio.h
 create mode 100644 vu_common.c
 create mode 100644 vu_common.h

-- 
2.46.0
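For reference, the options listed above combine into a QEMU invocation
along these lines (a sketch only: the machine type, disk and remaining
options are elided, $RAMSIZE must match the guest memory size, and passt
must already be listening on the socket):

  qemu-system-x86_64 ... \
    -chardev socket,id=chr0,path=/tmp/passt_1.socket \
    -netdev vhost-user,id=netdev0,chardev=chr0 \
    -device virtio-net,netdev=netdev0 \
    -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE \
    -numa node,memdev=memfd0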
To be able to manage buffers inside shared memory provided by a VM via
the vhost-user interface, we cannot rely on buffers being located in a
pre-defined memory area, addressed with a base address and a 32-bit
offset. We need a 64-bit address, so replace struct desc with struct
iovec and update the range checking.

Signed-off-by: Laurent Vivier <lvivier(a)redhat.com>
Reviewed-by: David Gibson <david(a)gibson.dropbear.id.au>
---
 packet.c | 80 ++++++++++++++++++++++++++++++--------------------------
 packet.h | 14 ++--------
 2 files changed, 45 insertions(+), 49 deletions(-)

diff --git a/packet.c b/packet.c
index ccfc84607709..37489961a37e 100644
--- a/packet.c
+++ b/packet.c
@@ -22,6 +22,35 @@
 #include "util.h"
 #include "log.h"
 
+/**
+ * packet_check_range() - Check if a packet memory range is valid
+ * @p:		Packet pool
+ * @offset:	Offset of data range in packet descriptor
+ * @len:	Length of desired data range
+ * @start:	Start of the packet descriptor
+ * @func:	For tracing: name of calling function
+ * @line:	For tracing: caller line of function call
+ *
+ * Return: 0 if the range is valid, -1 otherwise
+ */
+static int packet_check_range(const struct pool *p, size_t offset, size_t len,
+			      const char *start, const char *func, int line)
+{
+	if (start < p->buf) {
+		trace("packet start %p before buffer start %p, "
+		      "%s:%i", (void *)start, (void *)p->buf, func, line);
+		return -1;
+	}
+
+	if (start + len + offset > p->buf + p->buf_size) {
+		trace("packet offset plus length %lu from size %lu, "
+		      "%s:%i", start - p->buf + len + offset,
+		      p->buf_size, func, line);
+		return -1;
+	}
+
+	return 0;
+}
 /**
  * packet_add_do() - Add data as packet descriptor to given pool
  * @p:		Existing pool
@@ -41,34 +70,16 @@ void packet_add_do(struct pool *p, size_t len, const char *start,
 		return;
 	}
 
-	if (start < p->buf) {
-		trace("add packet start %p before buffer start %p, %s:%i",
-		      (void *)start, (void *)p->buf, func, line);
+	if (packet_check_range(p, 0, len, start, func, line))
 		return;
-	}
-
-	if (start + len > p->buf + p->buf_size) {
-		trace("add packet start %p, length: %zu, buffer end %p, %s:%i",
-		      (void *)start, len, (void *)(p->buf + p->buf_size),
-		      func, line);
-		return;
-	}
 
 	if (len > UINT16_MAX) {
 		trace("add packet length %zu, %s:%i", len, func, line);
 		return;
 	}
 
-#if UINTPTR_MAX == UINT64_MAX
-	if ((uintptr_t)start - (uintptr_t)p->buf > UINT32_MAX) {
-		trace("add packet start %p, buffer start %p, %s:%i",
-		      (void *)start, (void *)p->buf, func, line);
-		return;
-	}
-#endif
-
-	p->pkt[idx].offset = start - p->buf;
-	p->pkt[idx].len = len;
+	p->pkt[idx].iov_base = (void *)start;
+	p->pkt[idx].iov_len = len;
 
 	p->count++;
 }
@@ -96,36 +107,31 @@ void *packet_get_do(const struct pool *p, size_t idx, size_t offset,
 		return NULL;
 	}
 
-	if (len > UINT16_MAX || len + offset > UINT32_MAX) {
+	if (len > UINT16_MAX) {
 		if (func) {
-			trace("packet data length %zu, offset %zu, %s:%i",
-			      len, offset, func, line);
+			trace("packet data length %zu, %s:%i",
+			      len, func, line);
 		}
 		return NULL;
 	}
 
-	if (p->pkt[idx].offset + len + offset > p->buf_size) {
+	if (len + offset > p->pkt[idx].iov_len) {
 		if (func) {
-			trace("packet offset plus length %zu from size %zu, "
-			      "%s:%i", p->pkt[idx].offset + len + offset,
-			      p->buf_size, func, line);
+			trace("data length %zu, offset %zu from length %zu, "
+			      "%s:%i", len, offset, p->pkt[idx].iov_len,
+			      func, line);
 		}
 		return NULL;
 	}
 
-	if (len + offset > p->pkt[idx].len) {
-		if (func) {
-			trace("data length %zu, offset %zu from length %u, "
-			      "%s:%i", len, offset, p->pkt[idx].len,
-			      func, line);
-		}
+	if (packet_check_range(p, offset, len, p->pkt[idx].iov_base,
+			       func, line))
 		return NULL;
-	}
 
 	if (left)
-		*left = p->pkt[idx].len - offset - len;
+		*left = p->pkt[idx].iov_len - offset - len;
 
-	return p->buf + p->pkt[idx].offset + offset;
+	return (char *)p->pkt[idx].iov_base + offset;
 }
 
 /**
diff --git a/packet.h b/packet.h
index a784b07bbed5..8377dcf678bb 100644
--- a/packet.h
+++ b/packet.h
@@ -6,16 +6,6 @@
 #ifndef PACKET_H
 #define PACKET_H
 
-/**
- * struct desc - Generic offset-based descriptor within buffer
- * @offset:	Offset of descriptor relative to buffer start, 32-bit limit
- * @len:	Length of descriptor, host order, 16-bit limit
- */
-struct desc {
-	uint32_t offset;
-	uint16_t len;
-};
-
 /**
  * struct pool - Generic pool of packets stored in a buffer
  * @buf:	Buffer storing packet descriptors
@@ -29,7 +19,7 @@ struct pool {
 	size_t buf_size;
 	size_t size;
 	size_t count;
-	struct desc pkt[1];
+	struct iovec pkt[1];
 };
 
 void packet_add_do(struct pool *p, size_t len, const char *start,
@@ -54,7 +44,7 @@ struct _name ## _t {					\
 	size_t buf_size;					\
 	size_t size;						\
 	size_t count;						\
-	struct desc pkt[_size];					\
+	struct iovec pkt[_size];				\
 }
 
 #define PACKET_POOL_INIT_NOCAST(_size, _buf, _buf_size)		\
-- 
2.46.0
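To illustrate the addressing model this patch moves to, here is a small
standalone sketch (not passt code: struct mini_pool, mini_add() and
mini_get() are illustrative stand-ins for struct pool, packet_add_do()
and packet_get_do(), with the trace() calls and the UINT16_MAX length
cap left out). A pool built on struct iovec stores a full pointer per
packet instead of a 32-bit offset from the buffer start:

  /* Minimal sketch of the iovec-based pool addressing model */
  #include <stdio.h>
  #include <sys/uio.h>

  #define POOL_SIZE 4

  struct mini_pool {
  	char *buf;
  	size_t buf_size;
  	size_t count;
  	struct iovec pkt[POOL_SIZE];
  };

  /* Store a packet descriptor: a full pointer, so the data no longer
   * has to live within a 32-bit offset of the buffer start */
  static int mini_add(struct mini_pool *p, const char *start, size_t len)
  {
  	if (p->count >= POOL_SIZE)
  		return -1;
  	if (start < p->buf || start + len > p->buf + p->buf_size)
  		return -1;	/* equivalent of packet_check_range() */

  	p->pkt[p->count].iov_base = (void *)start;
  	p->pkt[p->count].iov_len = len;
  	p->count++;
  	return 0;
  }

  /* Fetch a data range of packet @idx, starting at @offset */
  static void *mini_get(const struct mini_pool *p, size_t idx,
  		      size_t offset, size_t len)
  {
  	if (idx >= p->count || offset + len > p->pkt[idx].iov_len)
  		return NULL;

  	return (char *)p->pkt[idx].iov_base + offset;
  }

  int main(void)
  {
  	char buf[64] = "0123456789abcdef";
  	struct mini_pool p = { .buf = buf, .buf_size = sizeof(buf) };
  	char *data;

  	mini_add(&p, buf, 16);
  	data = mini_get(&p, 0, 10, 6);
  	if (data)
  		printf("%.6s\n", data);	/* prints "abcdef" */

  	return 0;
  }

With struct desc, mini_add() would have had to reject any start more
than UINT32_MAX bytes past buf; storing the pointer itself is what
buffers living in guest-mapped memory require.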
Add virtio.c and virtio.h that define the functions needed to manage virtqueues. Signed-off-by: Laurent Vivier <lvivier(a)redhat.com> --- Makefile | 4 +- util.h | 8 + virtio.c | 665 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ virtio.h | 183 +++++++++++++++ 4 files changed, 858 insertions(+), 2 deletions(-) create mode 100644 virtio.c create mode 100644 virtio.h diff --git a/Makefile b/Makefile index 74a95130d082..a2258891f104 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS) PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \ icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c + tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c QRAP_SRCS = qrap.c SRCS = $(PASST_SRCS) $(QRAP_SRCS) @@ -64,7 +64,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \ flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ - udp.h udp_flow.h util.h + udp.h udp_flow.h util.h virtio.h HEADERS = $(PASST_HEADERS) seccomp.h C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 }; diff --git a/util.h b/util.h index c7a59d5de14b..1d2c36126023 100644 --- a/util.h +++ b/util.h @@ -131,6 +131,14 @@ static inline uint32_t ntohl_unaligned(const void *p) return ntohl(val); } +static inline void barrier(void) { __asm__ __volatile__("" ::: "memory"); } +#define smp_mb() do { barrier(); __atomic_thread_fence(__ATOMIC_SEQ_CST); } while (0) +#define smp_mb_release() do { barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); } while (0) +#define smp_mb_acquire() do { barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); } while (0) + +#define smp_wmb() smp_mb_release() +#define smp_rmb() smp_mb_acquire() + #define NS_FN_STACK_SIZE (RLIMIT_STACK_VAL * 1024 / 8) int do_clone(int (*fn)(void *), char *stack_area, size_t stack_size, int flags, void *arg); diff --git a/virtio.c b/virtio.c new file mode 100644 index 000000000000..380590afbca3 --- /dev/null +++ b/virtio.c @@ -0,0 +1,665 @@ +// SPDX-License-Identifier: GPL-2.0-or-later AND BSD-3-Clause +/* + * virtio API, vring and virtqueue functions definition + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +/* Some parts copied from QEMU subprojects/libvhost-user/libvhost-user.c + * originally licensed under the following terms: + * + * -- + * + * Copyright IBM, Corp. 2007 + * Copyright (c) 2016 Red Hat, Inc. + * + * Authors: + * Anthony Liguori <aliguori(a)us.ibm.com> + * Marc-André Lureau <mlureau(a)redhat.com> + * Victor Kaplansky <victork(a)redhat.com> + * + * This work is licensed under the terms of the GNU GPL, version 2 or + * later. See the COPYING file in the top-level directory. + * + * Some parts copied from QEMU hw/virtio/virtio.c + * licensed under the following terms: + * + * Copyright IBM, Corp. 2007 + * + * Authors: + * Anthony Liguori <aliguori(a)us.ibm.com> + * + * This work is licensed under the terms of the GNU GPL, version 2. See + * the COPYING file in the top-level directory. 
+ * + * -- + * + * virtq_used_event() and virtq_avail_event() from + * https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html… + * licensed under the following terms: + * + * -- + * + * This header is BSD licensed so anyone can use the definitions + * to implement compatible drivers/servers. + * + * Copyright 2007, 2009, IBM Corporation + * Copyright 2011, Red Hat, Inc + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. Neither the name of IBM nor the names of its contributors + * may be used to endorse or promote products derived from this software + * without specific prior written permission. + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ‘‘AS IS’’ AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ + +#include <stddef.h> +#include <endian.h> +#include <string.h> +#include <errno.h> +#include <sys/eventfd.h> +#include <sys/socket.h> + +#include "util.h" +#include "virtio.h" + +#define VIRTQUEUE_MAX_SIZE 1024 + +/** + * vu_gpa_to_va() - Translate guest physical address to our virtual address. + * @dev: Vhost-user device + * @plen: Physical length to map (input), capped to region (output) + * @guest_addr: Guest physical address + * + * Return: virtual address in our address space of the guest physical address + */ +static void *vu_gpa_to_va(struct vu_dev *dev, uint64_t *plen, uint64_t guest_addr) +{ + unsigned int i; + + if (*plen == 0) + return NULL; + + /* Find matching memory region. 
*/ + for (i = 0; i < dev->nregions; i++) { + const struct vu_dev_region *r = &dev->regions[i]; + + if ((guest_addr >= r->gpa) && + (guest_addr < (r->gpa + r->size))) { + if ((guest_addr + *plen) > (r->gpa + r->size)) + *plen = r->gpa + r->size - guest_addr; + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */ + return (void *)(guest_addr - r->gpa + r->mmap_addr + + r->mmap_offset); + } + } + + return NULL; +} + +/** + * vring_avail_flags() - Read the available ring flags + * @vq: Virtqueue + * + * Return: the available ring descriptor flags of the given virtqueue + */ +static inline uint16_t vring_avail_flags(const struct vu_virtq *vq) +{ + return le16toh(vq->vring.avail->flags); +} + +/** + * vring_avail_idx() - Read the available ring index + * @vq: Virtqueue + * + * Return: the available ring index of the given virtqueue + */ +static inline uint16_t vring_avail_idx(struct vu_virtq *vq) +{ + vq->shadow_avail_idx = le16toh(vq->vring.avail->idx); + + return vq->shadow_avail_idx; +} + +/** + * vring_avail_ring() - Read an available ring entry + * @vq: Virtqueue + * @i: Index of the entry to read + * + * Return: the ring entry content (head of the descriptor chain) + */ +static inline uint16_t vring_avail_ring(const struct vu_virtq *vq, int i) +{ + return le16toh(vq->vring.avail->ring[i]); +} + +/** + * virtq_used_event - Get location of used event indices + * (only with VIRTIO_F_EVENT_IDX) + * @vq Virtqueue + * + * Return: return the location of the used event index + */ +static inline uint16_t *virtq_used_event(const struct vu_virtq *vq) +{ + /* For backwards compat, used event index is at *end* of avail ring. */ + return &vq->vring.avail->ring[vq->vring.num]; +} + +/** + * vring_get_used_event() - Get the used event from the available ring + * @vq Virtqueue + * + * Return: the used event (available only if VIRTIO_RING_F_EVENT_IDX is set) + * used_event is a performant alternative where the driver + * specifies how far the device can progress before a notification + * is required. + */ +static inline uint16_t vring_get_used_event(const struct vu_virtq *vq) +{ + return le16toh(*virtq_used_event(vq)); +} + +/** + * virtqueue_get_head() - Get the head of the descriptor chain for a given + * index + * @vq: Virtqueue + * @idx: Available ring entry index + * @head: Head of the descriptor chain + */ +static void virtqueue_get_head(const struct vu_virtq *vq, + unsigned int idx, unsigned int *head) +{ + /* Grab the next descriptor number they're advertising, and increment + * the index we've seen. + */ + *head = vring_avail_ring(vq, idx % vq->vring.num); + + /* If their number is silly, that's a fatal mistake. 
+	 */
+	if (*head >= vq->vring.num)
+		die("vhost-user: Guest says index %u is available", *head);
+}
+
+/**
+ * virtqueue_read_indirect_desc() - Copy virtio ring descriptors from guest
+ *                                  memory
+ * @dev:	Vhost-user device
+ * @desc:	Destination address to copy the descriptors to
+ * @addr:	Guest memory address to copy from
+ * @len:	Length of memory to copy
+ *
+ * Return: -1 if there is an error, 0 otherwise
+ */
+static int virtqueue_read_indirect_desc(struct vu_dev *dev, struct vring_desc *desc,
+					uint64_t addr, size_t len)
+{
+	uint64_t read_len;
+
+	if (len > (VIRTQUEUE_MAX_SIZE * sizeof(struct vring_desc)))
+		return -1;
+
+	if (len == 0)
+		return -1;
+
+	while (len) {
+		const struct vring_desc *orig_desc;
+
+		read_len = len;
+		orig_desc = vu_gpa_to_va(dev, &read_len, addr);
+		if (!orig_desc)
+			return -1;
+
+		memcpy(desc, orig_desc, read_len);
+		len -= read_len;
+		addr += read_len;
+		desc += read_len / sizeof(struct vring_desc);
+	}
+
+	return 0;
+}
+
+/**
+ * enum virtqueue_read_desc_state - State in the descriptor chain
+ * @VIRTQUEUE_READ_DESC_ERROR	Found an invalid descriptor
+ * @VIRTQUEUE_READ_DESC_DONE	No more descriptors in the chain
+ * @VIRTQUEUE_READ_DESC_MORE	There are more descriptors in the chain
+ */
+enum virtqueue_read_desc_state {
+	VIRTQUEUE_READ_DESC_ERROR = -1,
+	VIRTQUEUE_READ_DESC_DONE = 0,	/* end of chain */
+	VIRTQUEUE_READ_DESC_MORE = 1,	/* more buffers in chain */
+};
+
+/**
+ * virtqueue_read_next_desc() - Read the next descriptor in the chain
+ * @desc:	Virtio ring descriptors
+ * @i:		Index of the current descriptor
+ * @max:	Maximum value of the descriptor index
+ * @next:	Index of the next descriptor in the chain (output value)
+ *
+ * Return: current chain descriptor state (error, next, done)
+ */
+static int virtqueue_read_next_desc(const struct vring_desc *desc,
+				    int i, unsigned int max, unsigned int *next)
+{
+	/* If this descriptor says it doesn't chain, we're done. */
+	if (!(le16toh(desc[i].flags) & VRING_DESC_F_NEXT))
+		return VIRTQUEUE_READ_DESC_DONE;
+
+	/* Check they're not leading us off end of descriptors. */
+	*next = le16toh(desc[i].next);
+	/* Make sure compiler knows to grab that: we don't want it changing! */
+	smp_wmb();
+
+	if (*next >= max)
+		return VIRTQUEUE_READ_DESC_ERROR;
+
+	return VIRTQUEUE_READ_DESC_MORE;
+}
+
+/**
+ * vu_queue_empty() - Check if virtqueue is empty
+ * @vq:		Virtqueue
+ *
+ * Return: true if the virtqueue is empty, false otherwise
+ */
+bool vu_queue_empty(struct vu_virtq *vq)
+{
+	if (!vq->vring.avail)
+		return true;
+
+	if (vq->shadow_avail_idx != vq->last_avail_idx)
+		return false;
+
+	return vring_avail_idx(vq) == vq->last_avail_idx;
+}
+
+/**
+ * vring_can_notify() - Check if a notification can be sent
+ * @dev:	Vhost-user device
+ * @vq:		Virtqueue
+ *
+ * Return: true if notification can be sent
+ */
+static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
+{
+	uint16_t old, new;
+	bool v;
+
+	/* We need to expose used array entries before checking used event.
*/ + smp_mb(); + + /* Always notify when queue is empty (when feature acknowledge) */ + if (vu_has_feature(dev, VIRTIO_F_NOTIFY_ON_EMPTY) && + !vq->inuse && vu_queue_empty(vq)) + return true; + + if (!vu_has_feature(dev, VIRTIO_RING_F_EVENT_IDX)) + return !(vring_avail_flags(vq) & VRING_AVAIL_F_NO_INTERRUPT); + + v = vq->signalled_used_valid; + vq->signalled_used_valid = true; + old = vq->signalled_used; + new = vq->signalled_used = vq->used_idx; + return !v || vring_need_event(vring_get_used_event(vq), new, old); +} + +/** + * vu_queue_notify() - Send a notification to the given virtqueue + * @dev: Vhost-user device + * @vq: Virtqueue + */ +/* cppcheck-suppress unusedFunction */ +void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq) +{ + if (!vq->vring.avail) + return; + + if (!vring_can_notify(dev, vq)) { + debug("vhost-user: virtqueue can skip notify..."); + return; + } + + if (eventfd_write(vq->call_fd, 1) < 0) + die_perror("Error writing vhost-user queue eventfd"); +} + +/* virtq_avail_event() - Get location of available event indices + * (only with VIRTIO_F_EVENT_IDX) + * @vq: Virtqueue + * + * Return: return the location of the available event index + */ +static inline uint16_t *virtq_avail_event(const struct vu_virtq *vq) +{ + /* For backwards compat, avail event index is at *end* of used ring. */ + return (uint16_t *)&vq->vring.used->ring[vq->vring.num]; +} + +/** + * vring_set_avail_event() - Set avail_event + * @vq: Virtqueue + * @val: Value to set to avail_event + * avail_event is used in the same way the used_event is in the + * avail_ring. + * avail_event is used to advise the driver that notifications + * are unnecessary until the driver writes entry with an index + * specified by avail_event into the available ring. + */ +static inline void vring_set_avail_event(const struct vu_virtq *vq, + uint16_t val) +{ + uint16_t val_le = htole16(val); + + if (!vq->notification) + return; + + memcpy(virtq_avail_event(vq), &val_le, sizeof(val_le)); +} + +/** + * virtqueue_map_desc() - Translate descriptor ring physical address into our + * virtual address space + * @dev: Vhost-user device + * @p_num_sg: First iov entry to use (input), + * first iov entry not used (output) + * @iov: Iov array to use to store buffer virtual addresses + * @max_num_sg: Maximum number of iov entries + * @pa: Guest physical address of the buffer to map into our virtual + * address + * @sz: Size of the buffer + * + * Return: false on error, true otherwise + */ +static bool virtqueue_map_desc(struct vu_dev *dev, + unsigned int *p_num_sg, struct iovec *iov, + unsigned int max_num_sg, + uint64_t pa, size_t sz) +{ + unsigned int num_sg = *p_num_sg; + + ASSERT(num_sg < max_num_sg); + ASSERT(sz); + + while (sz) { + uint64_t len = sz; + + iov[num_sg].iov_base = vu_gpa_to_va(dev, &len, pa); + if (iov[num_sg].iov_base == NULL) + die("vhost-user: invalid address for buffers"); + iov[num_sg].iov_len = len; + num_sg++; + sz -= len; + pa += len; + } + + *p_num_sg = num_sg; + return true; +} + +/** + * vu_queue_map_desc - Map the virtqueue descriptor ring into our virtual + * address space + * @dev: Vhost-user device + * @vq: Virtqueue + * @idx: First descriptor ring entry to map + * @elem: Virtqueue element to store descriptor ring iov + * + * Return: -1 if there is an error, 0 otherwise + */ +static int vu_queue_map_desc(struct vu_dev *dev, struct vu_virtq *vq, unsigned int idx, + struct vu_virtq_element *elem) +{ + const struct vring_desc *desc = vq->vring.desc; + struct vring_desc 
desc_buf[VIRTQUEUE_MAX_SIZE];
+	unsigned int out_num = 0, in_num = 0;
+	unsigned int max = vq->vring.num;
+	unsigned int i = idx;
+	uint64_t read_len;
+	int rc;
+
+	if (le16toh(desc[i].flags) & VRING_DESC_F_INDIRECT) {
+		unsigned int desc_len;
+		uint64_t desc_addr;
+
+		if (le32toh(desc[i].len) % sizeof(struct vring_desc))
+			die("vhost-user: Invalid size for indirect buffer table");
+
+		/* loop over the indirect descriptor table */
+		desc_addr = le64toh(desc[i].addr);
+		desc_len = le32toh(desc[i].len);
+		max = desc_len / sizeof(struct vring_desc);
+		read_len = desc_len;
+		desc = vu_gpa_to_va(dev, &read_len, desc_addr);
+		if (desc && read_len != desc_len) {
+			/* Failed to use zero copy */
+			desc = NULL;
+			if (!virtqueue_read_indirect_desc(dev, desc_buf, desc_addr, desc_len))
+				desc = desc_buf;
+		}
+		if (!desc)
+			die("vhost-user: Invalid indirect buffer table");
+		i = 0;
+	}
+
+	/* Collect all the descriptors */
+	do {
+		if (le16toh(desc[i].flags) & VRING_DESC_F_WRITE) {
+			if (!virtqueue_map_desc(dev, &in_num, elem->in_sg,
+						elem->in_num,
+						le64toh(desc[i].addr),
+						le32toh(desc[i].len)))
+				return -1;
+		} else {
+			if (in_num)
+				die("Incorrect order for descriptors");
+			if (!virtqueue_map_desc(dev, &out_num, elem->out_sg,
+						elem->out_num,
+						le64toh(desc[i].addr),
+						le32toh(desc[i].len))) {
+				return -1;
+			}
+		}
+
+		/* If we've got too many, that implies a descriptor loop. */
+		if ((in_num + out_num) > max)
+			die("vhost-user: Loop in queue descriptor list");
+		rc = virtqueue_read_next_desc(desc, i, max, &i);
+	} while (rc == VIRTQUEUE_READ_DESC_MORE);
+
+	if (rc == VIRTQUEUE_READ_DESC_ERROR)
+		die("vhost-user: Failed to read descriptor list");
+
+	elem->index = idx;
+	elem->in_num = in_num;
+	elem->out_num = out_num;
+
+	return 0;
+}
+
+/**
+ * vu_queue_pop() - Pop an entry from the virtqueue
+ * @dev:	Vhost-user device
+ * @vq:		Virtqueue
+ * @elem:	Virtqueue element to fill with the entry information
+ *
+ * Return: -1 if there is an error, 0 otherwise
+ */
+/* cppcheck-suppress unusedFunction */
+int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_element *elem)
+{
+	unsigned int head;
+	int ret;
+
+	if (!vq->vring.avail)
+		return -1;
+
+	if (vu_queue_empty(vq))
+		return -1;
+
+	/* Needed after vu_queue_empty(), see comment in
+	 * virtqueue_num_heads().
+	 */
+	smp_rmb();
+
+	if (vq->inuse >= vq->vring.num)
+		die("vhost-user queue size exceeded");
+
+	virtqueue_get_head(vq, vq->last_avail_idx++, &head);
+
+	if (vu_has_feature(dev, VIRTIO_RING_F_EVENT_IDX))
+		vring_set_avail_event(vq, vq->last_avail_idx);
+
+	ret = vu_queue_map_desc(dev, vq, head, elem);
+
+	if (ret < 0)
+		return ret;
+
+	vq->inuse++;
+
+	return 0;
+}
+
+/**
+ * vu_queue_detach_element() - Detach an element from the virtqueue
+ * @vq:		Virtqueue
+ */
+void vu_queue_detach_element(struct vu_virtq *vq)
+{
+	vq->inuse--;
+	/* unmap, when DMA support is added */
+}
+
+/**
+ * vu_queue_unpop() - Push back the previously popped element from the
+ *		      virtqueue
+ * @vq:		Virtqueue
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_unpop(struct vu_virtq *vq)
+{
+	vq->last_avail_idx--;
+	vu_queue_detach_element(vq);
+}
+
+/**
+ * vu_queue_rewind() - Push back a given number of popped elements
+ * @vq:		Virtqueue
+ * @num:	Number of elements to unpop
+ *
+ * Return: true if the elements were pushed back, false otherwise
+ */
+/* cppcheck-suppress unusedFunction */
+bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num)
+{
+	if (num > vq->inuse)
+		return false;
+
+	vq->last_avail_idx -= num;
+	vq->inuse -= num;
+	return true;
+}
+
+/**
+ * vring_used_write() - Write an entry in the used ring
+ * @vq:		Virtqueue
+ * @uelem:	Entry to write
+ * @i:		Index of the entry in the used ring
+ */
+static inline void vring_used_write(struct vu_virtq *vq,
+				    const struct vring_used_elem *uelem, int i)
+{
+	struct vring_used *used = vq->vring.used;
+
+	used->ring[i] = *uelem;
+}
+
+/**
+ * vu_queue_fill_by_index() - Update information of a descriptor ring entry
+ *			      in the used ring
+ * @vq:		Virtqueue
+ * @index:	Descriptor ring index
+ * @len:	Size of the element
+ * @idx:	Used ring entry index
+ */
+void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
+			    unsigned int len, unsigned int idx)
+{
+	struct vring_used_elem uelem;
+
+	if (!vq->vring.avail)
+		return;
+
+	idx = (idx + vq->used_idx) % vq->vring.num;
+
+	uelem.id = htole32(index);
+	uelem.len = htole32(len);
+	vring_used_write(vq, &uelem, idx);
+}
+
+/**
+ * vu_queue_fill() - Update information of a given element in the used ring
+ * @vq:		Virtqueue
+ * @elem:	Element information to fill
+ * @len:	Size of the element
+ * @idx:	Used ring entry index
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *elem,
+		   unsigned int len, unsigned int idx)
+{
+	vu_queue_fill_by_index(vq, elem->index, len, idx);
+}
+
+/**
+ * vring_used_idx_set() - Set the descriptor ring current index
+ * @vq:		Virtqueue
+ * @val:	Value to set in the index
+ */
+static inline void vring_used_idx_set(struct vu_virtq *vq, uint16_t val)
+{
+	vq->vring.used->idx = htole16(val);
+
+	vq->used_idx = val;
+}
+
+/**
+ * vu_queue_flush() - Flush the virtqueue
+ * @vq:		Virtqueue
+ * @count:	Number of entries to flush
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
+{
+	uint16_t old, new;
+
+	if (!vq->vring.avail)
+		return;
+
+	/* Make sure buffer is written before we update index.
+	 */
+	smp_wmb();
+
+	old = vq->used_idx;
+	new = old + count;
+	vring_used_idx_set(vq, new);
+	vq->inuse -= count;
+	if ((uint16_t)(new - vq->signalled_used) < (uint16_t)(new - old))
+		vq->signalled_used_valid = false;
+}
diff --git a/virtio.h b/virtio.h
new file mode 100644
index 000000000000..94efeb049fbc
--- /dev/null
+++ b/virtio.h
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * virtio API, vring and virtqueue functions definition
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier(a)redhat.com>
+ */
+
+#ifndef VIRTIO_H
+#define VIRTIO_H
+
+#include <stdbool.h>
+#include <linux/vhost_types.h>
+
+/* Maximum size of a virtqueue */
+#define VIRTQUEUE_MAX_SIZE 1024
+
+/**
+ * struct vu_ring - Virtqueue rings
+ * @num:		Size of the queue
+ * @desc:		Descriptor ring
+ * @avail:		Available ring
+ * @used:		Used ring
+ * @log_guest_addr:	Guest address for logging
+ * @flags:		Vring flags
+ *			VHOST_VRING_F_LOG is set if log address is valid
+ */
+struct vu_ring {
+	unsigned int num;
+	struct vring_desc *desc;
+	struct vring_avail *avail;
+	struct vring_used *used;
+	uint64_t log_guest_addr;
+	uint32_t flags;
+};
+
+/**
+ * struct vu_virtq - Virtqueue definition
+ * @vring:			Virtqueue rings
+ * @last_avail_idx:		Next head to pop
+ * @shadow_avail_idx:		Last avail_idx read from VQ.
+ * @used_idx:			Descriptor ring current index
+ * @signalled_used:		Last used index value we have signalled on
+ * @signalled_used_valid:	True if signalled_used is valid
+ * @notification:		True if the queues notify (via event
+ *				index or interrupt)
+ * @inuse:			Number of entries in use
+ * @call_fd:			The event file descriptor to signal when
+ *				buffers are used.
+ * @kick_fd:			The event file descriptor for adding
+ *				buffers to the vring
+ * @err_fd:			The event file descriptor to signal when
+ *				an error occurs
+ * @enable:			True if the virtqueue is enabled
+ * @started:			True if the virtqueue is started
+ * @vra:			QEMU address of our rings
+ */
+struct vu_virtq {
+	struct vu_ring vring;
+	uint16_t last_avail_idx;
+	uint16_t shadow_avail_idx;
+	uint16_t used_idx;
+	uint16_t signalled_used;
+	bool signalled_used_valid;
+	bool notification;
+	unsigned int inuse;
+	int call_fd;
+	int kick_fd;
+	int err_fd;
+	unsigned int enable;
+	bool started;
+	struct vhost_vring_addr vra;
+};
+
+/**
+ * struct vu_dev_region - guest shared memory region
+ * @gpa:		Guest physical address of the region
+ * @size:		Memory size in bytes
+ * @qva:		QEMU virtual address
+ * @mmap_offset:	Offset where the region starts in the mapped memory
+ * @mmap_addr:		Address of the mapped memory
+ */
+struct vu_dev_region {
+	uint64_t gpa;
+	uint64_t size;
+	uint64_t qva;
+	uint64_t mmap_offset;
+	uint64_t mmap_addr;
+};
+
+#define VHOST_USER_MAX_QUEUES 2
+
+/*
+ * Set a reasonable maximum number of ram slots, which will be supported by
+ * any architecture.
+ */
+#define VHOST_USER_MAX_RAM_SLOTS 32
+
+/**
+ * struct vu_dev - vhost-user device information
+ * @context:		Execution context
+ * @nregions:		Number of shared memory regions
+ * @regions:		Guest shared memory regions
+ * @vq:			Virtqueues
+ * @features:		Vhost-user features
+ * @protocol_features:	Vhost-user protocol features
+ */
+struct vu_dev {
+	uint32_t nregions;
+	struct vu_dev_region regions[VHOST_USER_MAX_RAM_SLOTS];
+	struct vu_virtq vq[VHOST_USER_MAX_QUEUES];
+	uint64_t features;
+	uint64_t protocol_features;
+};
+
+/**
+ * struct vu_virtq_element - virtqueue element
+ * @index:	Descriptor ring index
+ * @out_num:	Number of outgoing iovec buffers
+ * @in_num:	Number of incoming iovec buffers
+ * @in_sg:	Incoming iovec buffers
+ * @out_sg:	Outgoing iovec buffers
+ */
+struct vu_virtq_element {
+	unsigned int index;
+	unsigned int out_num;
+	unsigned int in_num;
+	struct iovec *in_sg;
+	struct iovec *out_sg;
+};
+
+/**
+ * has_feature() - Check a feature bit in a features set
+ * @features:	Features set
+ * @fbit:	Feature bit to check
+ *
+ * Return: True if the feature bit is set
+ */
+static inline bool has_feature(uint64_t features, unsigned int fbit)
+{
+	return !!(features & (1ULL << fbit));
+}
+
+/**
+ * vu_has_feature() - Check if a virtio-net feature is available
+ * @vdev:	Vhost-user device
+ * @fbit:	Feature to check
+ *
+ * Return: True if the feature is available
+ */
+static inline bool vu_has_feature(const struct vu_dev *vdev,
+				  unsigned int fbit)
+{
+	return has_feature(vdev->features, fbit);
+}
+
+/**
+ * vu_has_protocol_feature() - Check if a vhost-user feature is available
+ * @vdev:	Vhost-user device
+ * @fbit:	Feature to check
+ *
+ * Return: True if the feature is available
+ */
+/* cppcheck-suppress unusedFunction */
+static inline bool vu_has_protocol_feature(const struct vu_dev *vdev,
+					   unsigned int fbit)
+{
+	return has_feature(vdev->protocol_features, fbit);
+}
+
+bool vu_queue_empty(struct vu_virtq *vq);
+void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq);
+int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq,
+		 struct vu_virtq_element *elem);
+void vu_queue_detach_element(struct vu_virtq *vq);
+void vu_queue_unpop(struct vu_virtq *vq);
+bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num);
+void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
+			    unsigned int len, unsigned int idx);
+void vu_queue_fill(struct vu_virtq *vq,
+		   const struct vu_virtq_element *elem, unsigned int len,
+		   unsigned int idx);
+void vu_queue_flush(struct vu_virtq *vq, unsigned int count);
+#endif /* VIRTIO_H */
-- 
2.46.0
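As a usage sketch for this API: pop elements from a queue, process their
scatter-gather lists, then fill, flush and notify. This assumes a struct
vu_dev already initialized by the vhost-user message handlers added in
the next patch; drain_queue() is a hypothetical helper (not part of this
series) that would be compiled together with virtio.c:

  #include "virtio.h"

  static void drain_queue(struct vu_dev *vdev, struct vu_virtq *vq)
  {
  	struct iovec sg[VIRTQUEUE_MAX_SIZE];
  	struct vu_virtq_element elem;
  	unsigned int count = 0;

  	while (!vu_queue_empty(vq)) {
  		/* out_num/out_sg describe the capacity we hand to pop;
  		 * vu_queue_map_desc() overwrites them with the mapped
  		 * descriptor chain */
  		elem.out_num = VIRTQUEUE_MAX_SIZE;
  		elem.out_sg = sg;
  		elem.in_num = 0;
  		elem.in_sg = NULL;

  		if (vu_queue_pop(vdev, vq, &elem) < 0)
  			break;

  		/* ...process elem.out_sg[0..elem.out_num - 1] here... */

  		/* return the element to the guest: len 0 as nothing
  		 * is written back into the buffers */
  		vu_queue_fill(vq, &elem, 0, count++);
  	}

  	if (count) {
  		vu_queue_flush(vq, count);	/* publish used ring index */
  		vu_queue_notify(vdev, vq);	/* kick the guest via call_fd */
  	}
  }

The same pop/fill/flush/notify pattern appears in vu_send() and
vu_handle_tx() in the next patch.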
Add vhost_user.c and vhost_user.h that define the functions needed to implement vhost-user backend. Signed-off-by: Laurent Vivier <lvivier(a)redhat.com> --- Makefile | 4 +- iov.c | 1 - vhost_user.c | 1245 ++++++++++++++++++++++++++++++++++++++++++++++++++ vhost_user.h | 209 +++++++++ virtio.c | 5 - virtio.h | 1 + 6 files changed, 1457 insertions(+), 8 deletions(-) create mode 100644 vhost_user.c create mode 100644 vhost_user.h diff --git a/Makefile b/Makefile index a2258891f104..0e8ed60a0da1 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,7 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS) PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \ icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c virtio.c + tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c QRAP_SRCS = qrap.c SRCS = $(PASST_SRCS) $(QRAP_SRCS) @@ -64,7 +64,7 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \ flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ - udp.h udp_flow.h util.h virtio.h + udp.h udp_flow.h util.h vhost_user.h virtio.h HEADERS = $(PASST_HEADERS) seccomp.h C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 }; diff --git a/iov.c b/iov.c index 3f9e229a305f..3741db21790f 100644 --- a/iov.c +++ b/iov.c @@ -68,7 +68,6 @@ size_t iov_skip_bytes(const struct iovec *iov, size_t n, * * Returns: The number of bytes successfully copied. */ -/* cppcheck-suppress unusedFunction */ size_t iov_from_buf(const struct iovec *iov, size_t iov_cnt, size_t offset, const void *buf, size_t bytes) { diff --git a/vhost_user.c b/vhost_user.c new file mode 100644 index 000000000000..3b38e06f268e --- /dev/null +++ b/vhost_user.c @@ -0,0 +1,1245 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * vhost-user API, command management and virtio interface + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + * + * Some parts from QEMU subprojects/libvhost-user/libvhost-user.c + * licensed under the following terms: + * + * Copyright IBM, Corp. 2007 + * Copyright (c) 2016 Red Hat, Inc. + * + * Authors: + * Anthony Liguori <aliguori(a)us.ibm.com> + * Marc-André Lureau <mlureau(a)redhat.com> + * Victor Kaplansky <victork(a)redhat.com> + * + * This work is licensed under the terms of the GNU GPL, version 2 or + * later. See the COPYING file in the top-level directory. + */ + +#include <errno.h> +#include <fcntl.h> +#include <stdlib.h> +#include <stdio.h> +#include <stdint.h> +#include <stddef.h> +#include <string.h> +#include <assert.h> +#include <stdbool.h> +#include <inttypes.h> +#include <time.h> +#include <net/ethernet.h> +#include <netinet/in.h> +#include <sys/epoll.h> +#include <sys/eventfd.h> +#include <sys/mman.h> +#include <linux/vhost_types.h> +#include <linux/virtio_net.h> + +#include "util.h" +#include "passt.h" +#include "tap.h" +#include "vhost_user.h" + +/* vhost-user version we are compatible with */ +#define VHOST_USER_VERSION 1 + +/** + * vu_print_capabilities() - print vhost-user capabilities + * this is part of the vhost-user backend + * convention. 
+ */ +/* cppcheck-suppress unusedFunction */ +void vu_print_capabilities(void) +{ + info("{"); + info(" \"type\": \"net\""); + info("}"); + exit(EXIT_SUCCESS); +} + +/** + * vu_request_to_string() - convert a vhost-user request number to its name + * @req: request number + * + * Return: the name of request number + */ +static const char *vu_request_to_string(unsigned int req) +{ + if (req < VHOST_USER_MAX) { +#define REQ(req) [req] = #req + static const char * const vu_request_str[VHOST_USER_MAX] = { + REQ(VHOST_USER_NONE), + REQ(VHOST_USER_GET_FEATURES), + REQ(VHOST_USER_SET_FEATURES), + REQ(VHOST_USER_SET_OWNER), + REQ(VHOST_USER_RESET_OWNER), + REQ(VHOST_USER_SET_MEM_TABLE), + REQ(VHOST_USER_SET_LOG_BASE), + REQ(VHOST_USER_SET_LOG_FD), + REQ(VHOST_USER_SET_VRING_NUM), + REQ(VHOST_USER_SET_VRING_ADDR), + REQ(VHOST_USER_SET_VRING_BASE), + REQ(VHOST_USER_GET_VRING_BASE), + REQ(VHOST_USER_SET_VRING_KICK), + REQ(VHOST_USER_SET_VRING_CALL), + REQ(VHOST_USER_SET_VRING_ERR), + REQ(VHOST_USER_GET_PROTOCOL_FEATURES), + REQ(VHOST_USER_SET_PROTOCOL_FEATURES), + REQ(VHOST_USER_GET_QUEUE_NUM), + REQ(VHOST_USER_SET_VRING_ENABLE), + REQ(VHOST_USER_SEND_RARP), + REQ(VHOST_USER_NET_SET_MTU), + REQ(VHOST_USER_SET_BACKEND_REQ_FD), + REQ(VHOST_USER_IOTLB_MSG), + REQ(VHOST_USER_SET_VRING_ENDIAN), + REQ(VHOST_USER_GET_CONFIG), + REQ(VHOST_USER_SET_CONFIG), + REQ(VHOST_USER_POSTCOPY_ADVISE), + REQ(VHOST_USER_POSTCOPY_LISTEN), + REQ(VHOST_USER_POSTCOPY_END), + REQ(VHOST_USER_GET_INFLIGHT_FD), + REQ(VHOST_USER_SET_INFLIGHT_FD), + REQ(VHOST_USER_GPU_SET_SOCKET), + REQ(VHOST_USER_VRING_KICK), + REQ(VHOST_USER_GET_MAX_MEM_SLOTS), + REQ(VHOST_USER_ADD_MEM_REG), + REQ(VHOST_USER_REM_MEM_REG), + }; +#undef REQ + return vu_request_str[req]; + } + + return "unknown"; +} + +/** + * qva_to_va() - Translate front-end (QEMU) virtual address to our virtual + * address + * @dev: vhost-user device + * @qemu_addr: front-end userspace address + * + * Return: the memory address in our process virtual address space. + */ +static void *qva_to_va(struct vu_dev *dev, uint64_t qemu_addr) +{ + unsigned int i; + + /* Find matching memory region. 
*/ + for (i = 0; i < dev->nregions; i++) { + const struct vu_dev_region *r = &dev->regions[i]; + + if ((qemu_addr >= r->qva) && (qemu_addr < (r->qva + r->size))) { + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */ + return (void *)(qemu_addr - r->qva + r->mmap_addr + + r->mmap_offset); + } + } + + return NULL; +} + +/** + * vmsg_close_fds() - Close all file descriptors of a given message + * @vmsg: vhost-user message with the list of the file descriptors + */ +static void vmsg_close_fds(const struct vhost_user_msg *vmsg) +{ + int i; + + for (i = 0; i < vmsg->fd_num; i++) + close(vmsg->fds[i]); +} + +/** + * vu_remove_watch() - Remove a file descriptor from our passt epoll + * file descriptor + * @vdev: vhost-user device + * @fd: file descriptor to remove + */ +static void vu_remove_watch(const struct vu_dev *vdev, int fd) +{ + /* Placeholder to add passt related code */ + (void)vdev; + (void)fd; +} + +/** + * vmsg_set_reply_u64() - Set reply payload.u64 and clear request flags + * and fd_num + * @vmsg: vhost-user message + * @val: 64-bit value to reply + */ +static void vmsg_set_reply_u64(struct vhost_user_msg *vmsg, uint64_t val) +{ + vmsg->hdr.flags = 0; /* defaults will be set by vu_send_reply() */ + vmsg->hdr.size = sizeof(vmsg->payload.u64); + vmsg->payload.u64 = val; + vmsg->fd_num = 0; +} + +/** + * vu_message_read_default() - Read incoming vhost-user message from the + * front-end + * @conn_fd: vhost-user command socket + * @vmsg: vhost-user message + * + * Return: 0 if recvmsg() has been interrupted or if there's no data to read, + * 1 if a message has been received + */ +static int vu_message_read_default(int conn_fd, struct vhost_user_msg *vmsg) +{ + char control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * + sizeof(int))] = { 0 }; + struct iovec iov = { + .iov_base = (char *)vmsg, + .iov_len = VHOST_USER_HDR_SIZE, + }; + struct msghdr msg = { + .msg_iov = &iov, + .msg_iovlen = 1, + .msg_control = control, + .msg_controllen = sizeof(control), + }; + ssize_t ret, sz_payload; + struct cmsghdr *cmsg; + + ret = recvmsg(conn_fd, &msg, MSG_DONTWAIT); + if (ret < 0) { + if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK) + return 0; + die_perror("vhost-user message receive (recvmsg)"); + } + + vmsg->fd_num = 0; + for (cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL; + cmsg = CMSG_NXTHDR(&msg, cmsg)) { + if (cmsg->cmsg_level == SOL_SOCKET && + cmsg->cmsg_type == SCM_RIGHTS) { + size_t fd_size; + + ASSERT(cmsg->cmsg_len >= CMSG_LEN(0)); + fd_size = cmsg->cmsg_len - CMSG_LEN(0); + ASSERT(fd_size <= sizeof(vmsg->fds)); + vmsg->fd_num = fd_size / sizeof(int); + memcpy(vmsg->fds, CMSG_DATA(cmsg), fd_size); + break; + } + } + + sz_payload = vmsg->hdr.size; + if ((size_t)sz_payload > sizeof(vmsg->payload)) { + die("vhost-user message request too big: %d," + " size: vmsg->size: %zd, " + "while sizeof(vmsg->payload) = %zu", + vmsg->hdr.request, sz_payload, sizeof(vmsg->payload)); + } + + if (sz_payload) { + do + ret = recv(conn_fd, &vmsg->payload, sz_payload, 0); + while (ret < 0 && errno == EINTR); + + if (ret < 0) + die_perror("vhost-user message receive"); + + if (ret == 0) + die("EOF on vhost-user message receive"); + + if (ret < sz_payload) + die("Short-read on vhost-user message receive"); + } + + return 1; +} + +/** + * vu_message_write() - Send a message to the front-end + * @conn_fd: vhost-user command socket + * @vmsg: vhost-user message + * + * #syscalls:vu sendmsg + */ +static void vu_message_write(int conn_fd, struct vhost_user_msg *vmsg) +{ + char 
control[CMSG_SPACE(VHOST_MEMORY_BASELINE_NREGIONS * sizeof(int))] = { 0 }; + struct iovec iov = { + .iov_base = (char *)vmsg, + .iov_len = VHOST_USER_HDR_SIZE + vmsg->hdr.size, + }; + struct msghdr msg = { + .msg_iov = &iov, + .msg_iovlen = 1, + .msg_control = control, + }; + int rc; + + ASSERT(vmsg->fd_num <= VHOST_MEMORY_BASELINE_NREGIONS); + if (vmsg->fd_num > 0) { + size_t fdsize = vmsg->fd_num * sizeof(int); + struct cmsghdr *cmsg; + + msg.msg_controllen = CMSG_SPACE(fdsize); + cmsg = CMSG_FIRSTHDR(&msg); + cmsg->cmsg_len = CMSG_LEN(fdsize); + cmsg->cmsg_level = SOL_SOCKET; + cmsg->cmsg_type = SCM_RIGHTS; + memcpy(CMSG_DATA(cmsg), vmsg->fds, fdsize); + } + + do + rc = sendmsg(conn_fd, &msg, 0); + while (rc < 0 && errno == EINTR); + + if (rc < 0) + die_perror("vhost-user message send"); + + if ((uint32_t)rc < VHOST_USER_HDR_SIZE + vmsg->hdr.size) + die("EOF on vhost-user message send"); +} + +/** + * vu_send_reply() - Update message flags and send it to front-end + * @conn_fd: vhost-user command socket + * @vmsg: vhost-user message + */ +static void vu_send_reply(int conn_fd, struct vhost_user_msg *msg) +{ + msg->hdr.flags &= ~VHOST_USER_VERSION_MASK; + msg->hdr.flags |= VHOST_USER_VERSION; + msg->hdr.flags |= VHOST_USER_REPLY_MASK; + + vu_message_write(conn_fd, msg); +} + +/** + * vu_get_features_exec() - Provide back-end features bitmask to front-end + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: True as a reply is requested + */ +static bool vu_get_features_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + uint64_t features = + 1ULL << VIRTIO_F_VERSION_1 | + 1ULL << VIRTIO_NET_F_MRG_RXBUF | + 1ULL << VHOST_USER_F_PROTOCOL_FEATURES; + + (void)vdev; + + vmsg_set_reply_u64(msg, features); + + debug("Sending back to guest u64: 0x%016"PRIx64, msg->payload.u64); + + return true; +} + +/** + * vu_set_enable_all_rings() - Enable/disable all the virtqueues + * @vdev: vhost-user device + * @enable: New virtqueues state + */ +static void vu_set_enable_all_rings(struct vu_dev *vdev, bool enable) +{ + uint16_t i; + + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) + vdev->vq[i].enable = enable; +} + +/** + * vu_set_features_exec() - Enable features of the back-end + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + */ +static bool vu_set_features_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + debug("u64: 0x%016"PRIx64, msg->payload.u64); + + vdev->features = msg->payload.u64; + /* We only support devices conforming to VIRTIO 1.0 or + * later + */ + if (!vu_has_feature(vdev, VIRTIO_F_VERSION_1)) + die("virtio legacy devices aren't supported by passt"); + + if (!vu_has_feature(vdev, VHOST_USER_F_PROTOCOL_FEATURES)) + vu_set_enable_all_rings(vdev, true); + + return false; +} + +/** + * vu_set_owner_exec() - Session start flag, do nothing in our case + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + */ +static bool vu_set_owner_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + (void)vdev; + (void)msg; + + return false; +} + +/** + * map_ring() - Convert ring front-end (QEMU) addresses to our process + * virtual address space. 
+ * @vdev: vhost-user device + * @vq: Virtqueue + * + * Return: True if ring cannot be mapped to our address space + */ +static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq) +{ + vq->vring.desc = qva_to_va(vdev, vq->vra.desc_user_addr); + vq->vring.used = qva_to_va(vdev, vq->vra.used_user_addr); + vq->vring.avail = qva_to_va(vdev, vq->vra.avail_user_addr); + + debug("Setting virtq addresses:"); + debug(" vring_desc at %p", (void *)vq->vring.desc); + debug(" vring_used at %p", (void *)vq->vring.used); + debug(" vring_avail at %p", (void *)vq->vring.avail); + + return !(vq->vring.desc && vq->vring.used && vq->vring.avail); +} + +/** + * vu_packet_check_range() - Check if a given memory zone is contained in + * a mapped guest memory region + * @buf: Array of the available memory regions + * @offset: Offset of data range in packet descriptor + * @size: Length of desired data range + * @start: Start of the packet descriptor + * + * Return: 0 if the zone is in a mapped memory region, -1 otherwise + */ +/* cppcheck-suppress unusedFunction */ +int vu_packet_check_range(void *buf, size_t offset, size_t len, + const char *start) +{ + struct vu_dev_region *dev_region; + + for (dev_region = buf; dev_region->mmap_addr; dev_region++) { + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */ + char *m = (char *)dev_region->mmap_addr; + + if (m <= start && + start + offset + len <= m + dev_region->mmap_offset + + dev_region->size) + return 0; + } + + return -1; +} + +/** + * vu_set_mem_table_exec() - Sets the memory map regions to be able to + * translate the vring addresses. + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + * + * #syscalls:vu mmap munmap + */ +static bool vu_set_mem_table_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + struct vhost_user_memory m = msg->payload.memory, *memory = &m; + unsigned int i; + + for (i = 0; i < vdev->nregions; i++) { + struct vu_dev_region *r = &vdev->regions[i]; + /* NOLINTNEXTLINE(performance-no-int-to-ptr) */ + void *mm = (void *)r->mmap_addr; + + if (mm) + munmap(mm, r->size + r->mmap_offset); + } + vdev->nregions = memory->nregions; + + debug("vhost-user nregions: %u", memory->nregions); + for (i = 0; i < vdev->nregions; i++) { + struct vhost_user_memory_region *msg_region = &memory->regions[i]; + struct vu_dev_region *dev_region = &vdev->regions[i]; + void *mmap_addr; + + debug("vhost-user region %d", i); + debug(" guest_phys_addr: 0x%016"PRIx64, + msg_region->guest_phys_addr); + debug(" memory_size: 0x%016"PRIx64, + msg_region->memory_size); + debug(" userspace_addr 0x%016"PRIx64, + msg_region->userspace_addr); + debug(" mmap_offset 0x%016"PRIx64, + msg_region->mmap_offset); + + dev_region->gpa = msg_region->guest_phys_addr; + dev_region->size = msg_region->memory_size; + dev_region->qva = msg_region->userspace_addr; + dev_region->mmap_offset = msg_region->mmap_offset; + + /* We don't use offset argument of mmap() since the + * mapped address has to be page aligned. 
+ */ + mmap_addr = mmap(0, dev_region->size + dev_region->mmap_offset, + PROT_READ | PROT_WRITE, MAP_SHARED | + MAP_NORESERVE, msg->fds[i], 0); + + if (mmap_addr == MAP_FAILED) + die_perror("vhost-user region mmap error"); + + dev_region->mmap_addr = (uint64_t)(uintptr_t)mmap_addr; + debug(" mmap_addr: 0x%016"PRIx64, + dev_region->mmap_addr); + + close(msg->fds[i]); + } + + for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) { + if (vdev->vq[i].vring.desc) { + if (map_ring(vdev, &vdev->vq[i])) + die("remapping queue %d during setmemtable", i); + } + } + + return false; +} + +/** + * vu_set_vring_num_exec() - Set the size of the queue (vring size) + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + */ +static bool vu_set_vring_num_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + unsigned int idx = msg->payload.state.index; + unsigned int num = msg->payload.state.num; + + debug("State.index: %u", idx); + debug("State.num: %u", num); + vdev->vq[idx].vring.num = num; + + return false; +} + +/** + * vu_set_vring_addr_exec() - Set the addresses of the vring + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + */ +static bool vu_set_vring_addr_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + /* We need to copy the payload to vhost_vring_addr structure + * to access index because address of msg->payload.addr + * can be unaligned as it is packed. + */ + struct vhost_vring_addr addr = msg->payload.addr; + struct vu_virtq *vq = &vdev->vq[addr.index]; + + debug("vhost_vring_addr:"); + debug(" index: %d", addr.index); + debug(" flags: %d", addr.flags); + debug(" desc_user_addr: 0x%016" PRIx64, + (uint64_t)addr.desc_user_addr); + debug(" used_user_addr: 0x%016" PRIx64, + (uint64_t)addr.used_user_addr); + debug(" avail_user_addr: 0x%016" PRIx64, + (uint64_t)addr.avail_user_addr); + debug(" log_guest_addr: 0x%016" PRIx64, + (uint64_t)addr.log_guest_addr); + + vq->vra = msg->payload.addr; + vq->vring.flags = addr.flags; + vq->vring.log_guest_addr = addr.log_guest_addr; + + if (map_ring(vdev, vq)) + die("Invalid vring_addr message"); + + vq->used_idx = le16toh(vq->vring.used->idx); + + if (vq->last_avail_idx != vq->used_idx) { + debug("Last avail index != used index: %u != %u", + vq->last_avail_idx, vq->used_idx); + } + + return false; +} +/** + * vu_set_vring_base_exec() - Sets the next index to use for descriptors + * in this vring + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: False as no reply is requested + */ +static bool vu_set_vring_base_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + unsigned int idx = msg->payload.state.index; + unsigned int num = msg->payload.state.num; + + debug("State.index: %u", idx); + debug("State.num: %u", num); + vdev->vq[idx].shadow_avail_idx = vdev->vq[idx].last_avail_idx = num; + + return false; +} + +/** + * vu_get_vring_base_exec() - Stops the vring and returns the current + * descriptor index or indices + * @vdev: vhost-user device + * @vmsg: vhost-user message + * + * Return: True as a reply is requested + */ +static bool vu_get_vring_base_exec(struct vu_dev *vdev, + struct vhost_user_msg *msg) +{ + unsigned int idx = msg->payload.state.index; + + debug("State.index: %u", idx); + msg->payload.state.num = vdev->vq[idx].last_avail_idx; + msg->hdr.size = sizeof(msg->payload.state); + + vdev->vq[idx].started = false; + + if (vdev->vq[idx].call_fd != -1) { + close(vdev->vq[idx].call_fd); + 
vdev->vq[idx].call_fd = -1; + } + if (vdev->vq[idx].kick_fd != -1) { + vu_remove_watch(vdev, vdev->vq[idx].kick_fd); + close(vdev->vq[idx].kick_fd); + vdev->vq[idx].kick_fd = -1; + } + + return true; +} + +/** + * vu_set_watch() - Add a file descriptor to the passt epoll file descriptor + * @vdev: vhost-user device + * @fd: file descriptor to add + */ +static void vu_set_watch(const struct vu_dev *vdev, int fd) +{ + /* Placeholder to add passt related code */ + (void)vdev; + (void)fd; +} + +/** + * vu_wait_queue() - wait for new free entries in the virtqueue + * @vq: virtqueue to wait on + */ +static int vu_wait_queue(const struct vu_virtq *vq) +{ + eventfd_t kick_data; + ssize_t rc; + int status; + + /* wait for the kernel to put new entries in the queue */ + status = fcntl(vq->kick_fd, F_GETFL); + if (status == -1) + return -1; + + if (fcntl(vq->kick_fd, F_SETFL, status & ~O_NONBLOCK)) + return -1; + + rc = eventfd_read(vq->kick_fd, &kick_data); + + if (fcntl(vq->kick_fd, F_SETFL, status)) + return -1; + + if (rc == -1) + return -1; + + return 0; +} + +/** + * vu_send() - Send a buffer to the front-end using the RX virtqueue + * @vdev: vhost-user device + * @buf: address of the buffer + * @size: size of the buffer + * + * Return: number of bytes sent, -1 if there is an error + */ +/* cppcheck-suppress unusedFunction */ +int vu_send(struct vu_dev *vdev, const void *buf, size_t size) +{ + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE]; + struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; + size_t lens[VIRTQUEUE_MAX_SIZE]; + __virtio16 *num_buffers_ptr = NULL; + size_t hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf); + int in_sg_count = 0; + size_t offset = 0; + int i = 0, j; + + debug("vu_send size %zu hdrlen %zu", size, hdrlen); + + if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) { + err("Got packet, but no available descriptors on RX virtq."); + return 0; + } + + while (offset < size) { + size_t len; + int total; + int ret; + + total = 0; + + if (i == ARRAY_SIZE(elem) || + in_sg_count == ARRAY_SIZE(in_sg)) { + err("virtio-net unexpected long buffer chain"); + goto err; + } + + elem[i].out_num = 0; + elem[i].out_sg = NULL; + elem[i].in_num = ARRAY_SIZE(in_sg) - in_sg_count; + elem[i].in_sg = &in_sg[in_sg_count]; + + ret = vu_queue_pop(vdev, vq, &elem[i]); + if (ret < 0) { + if (vu_wait_queue(vq) != -1) + continue; + if (i) { + err("virtio-net unexpected empty queue: " + "i %d mergeable %d offset %zd, size %zd, " + "features 0x%" PRIx64, + i, vu_has_feature(vdev, + VIRTIO_NET_F_MRG_RXBUF), + offset, size, vdev->features); + } + offset = -1; + goto err; + } + in_sg_count += elem[i].in_num; + + if (elem[i].in_num < 1) { + err("virtio-net receive queue contains no in buffers"); + vu_queue_detach_element(vq); + offset = -1; + goto err; + } + + if (i == 0) { + struct virtio_net_hdr hdr = VU_HEADER; + + ASSERT(offset == 0); + ASSERT(elem[i].in_sg[0].iov_len >= hdrlen); + + len = iov_from_buf(elem[i].in_sg, elem[i].in_num, 0, + &hdr, sizeof(hdr)); + + num_buffers_ptr = (__virtio16 *)((char *)elem[i].in_sg[0].iov_base + + len); + + total += hdrlen; + } + + len = iov_from_buf(elem[i].in_sg, elem[i].in_num, total, + (char *)buf + offset, size - offset); + + total += len; + offset += len; + + /* If buffers can't be merged, at this point we + * must have consumed the complete packet. + * Otherwise, drop it. 
+		 */
+		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) &&
+		    offset < size) {
+			vu_queue_unpop(vq);
+			goto err;
+		}
+
+		lens[i] = total;
+		i++;
+	}
+
+	if (num_buffers_ptr && vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+		*num_buffers_ptr = htole16(i);
+
+	for (j = 0; j < i; j++) {
+		debug("filling total %zd idx %d", lens[j], j);
+		vu_queue_fill(vq, &elem[j], lens[j], j);
+	}
+
+	vu_queue_flush(vq, i);
+	vu_queue_notify(vdev, vq);
+
+	debug("vhost-user sent %zu", offset);
+
+	return offset;
+err:
+	for (j = 0; j < i; j++)
+		vu_queue_detach_element(vq);
+
+	return offset;
+}
+
+/**
+ * vu_handle_tx() - Receive data from the TX virtqueue
+ * @vdev:	vhost-user device
+ * @index:	index of the virtqueue
+ * @now:	Current timestamp
+ */
+static void vu_handle_tx(struct vu_dev *vdev, int index,
+			 const struct timespec *now)
+{
+	struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+	struct iovec out_sg[VIRTQUEUE_MAX_SIZE];
+	struct vu_virtq *vq = &vdev->vq[index];
+	int hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	int out_sg_count;
+	int count;
+
+	if (!VHOST_USER_IS_QUEUE_TX(index)) {
+		debug("vhost-user: index %d is not a TX queue", index);
+		return;
+	}
+
+	tap_flush_pools();
+
+	count = 0;
+	out_sg_count = 0;
+	while (count < VIRTQUEUE_MAX_SIZE) {
+		int ret;
+
+		elem[count].out_num = 1;
+		elem[count].out_sg = &out_sg[out_sg_count];
+		elem[count].in_num = 0;
+		elem[count].in_sg = NULL;
+		ret = vu_queue_pop(vdev, vq, &elem[count]);
+		if (ret < 0)
+			break;
+		out_sg_count += elem[count].out_num;
+
+		if (elem[count].out_num < 1) {
+			debug("virtio-net header not in first element");
+			break;
+		}
+		ASSERT(elem[count].out_num == 1);
+
+		tap_add_packet(vdev->context,
+			       elem[count].out_sg[0].iov_len - hdrlen,
+			       (char *)elem[count].out_sg[0].iov_base + hdrlen);
+		count++;
+	}
+	tap_handler(vdev->context, now);
+
+	if (count) {
+		int i;
+
+		for (i = 0; i < count; i++)
+			vu_queue_fill(vq, &elem[i], 0, i);
+		vu_queue_flush(vq, count);
+		vu_queue_notify(vdev, vq);
+	}
+}
+
+/**
+ * vu_kick_cb() - Called on a kick event to start receiving data
+ * @vdev:	vhost-user device
+ * @ref:	epoll reference information
+ * @now:	Current timestamp
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
+		const struct timespec *now)
+{
+	eventfd_t kick_data;
+	ssize_t rc;
+	int idx;
+
+	for (idx = 0; idx < VHOST_USER_MAX_QUEUES; idx++) {
+		if (vdev->vq[idx].kick_fd == ref.fd)
+			break;
+	}
+
+	if (idx == VHOST_USER_MAX_QUEUES)
+		return;
+
+	rc = eventfd_read(ref.fd, &kick_data);
+	if (rc == -1)
+		die_perror("vhost-user kick eventfd_read()");
+
+	debug("vhost-user: got kick_data: %016"PRIx64" idx: %d",
+	      kick_data, idx);
+	if (VHOST_USER_IS_QUEUE_TX(idx))
+		vu_handle_tx(vdev, idx, now);
+}
+
+/**
+ * vu_check_queue_msg_file() - Check if a message is valid,
+ *                             close fds if NOFD bit is set
+ * @msg:	vhost-user message
+ */
+static void vu_check_queue_msg_file(struct vhost_user_msg *msg)
+{
+	bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+	int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+	if (idx >= VHOST_USER_MAX_QUEUES)
+		die("Invalid vhost-user queue index: %u", idx);
+
+	if (nofd) {
+		vmsg_close_fds(msg);
+		return;
+	}
+
+	if (msg->fd_num != 1)
+		die("Invalid fds in vhost-user request: %d", msg->hdr.request);
+}
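/* Illustrative aside, not from the patch: the file descriptors checked
 * above arrive as SCM_RIGHTS ancillary data on the UNIX domain socket;
 * recv_with_fds() is a hypothetical, self-contained sketch of how a
 * receiver collects them into an fds[] array:
 */
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int recv_with_fds(int sock, void *buf, size_t len,
			 int *fds, size_t max_fds)
{
	char control[CMSG_SPACE(8 * sizeof(int))];
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct msghdr mh = {
		.msg_iov = &iov,	.msg_iovlen = 1,
		.msg_control = control,	.msg_controllen = sizeof(control),
	};
	struct cmsghdr *cmsg;
	size_t nfds = 0;

	if (recvmsg(sock, &mh, 0) < 0)
		return -1;

	for (cmsg = CMSG_FIRSTHDR(&mh); cmsg; cmsg = CMSG_NXTHDR(&mh, cmsg)) {
		if (cmsg->cmsg_level != SOL_SOCKET ||
		    cmsg->cmsg_type != SCM_RIGHTS)
			continue;

		nfds = (cmsg->cmsg_len - CMSG_LEN(0)) / sizeof(int);
		if (nfds > max_fds)
			nfds = max_fds;
		memcpy(fds, CMSG_DATA(cmsg), nfds * sizeof(int));
	}

	return (int)nfds;
}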
+/**
+ * vu_set_vring_kick_exec() - Set the event file descriptor for adding buffers
+ *                            to the vring
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_kick_exec(struct vu_dev *vdev,
+				   struct vhost_user_msg *msg)
+{
+	bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+	int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+	debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+	vu_check_queue_msg_file(msg);
+
+	if (vdev->vq[idx].kick_fd != -1) {
+		vu_remove_watch(vdev, vdev->vq[idx].kick_fd);
+		close(vdev->vq[idx].kick_fd);
+		vdev->vq[idx].kick_fd = -1;
+	}
+
+	if (!nofd)
+		vdev->vq[idx].kick_fd = msg->fds[0];
+
+	debug("Got kick_fd: %d for vq: %d", vdev->vq[idx].kick_fd, idx);
+
+	vdev->vq[idx].started = true;
+
+	if (vdev->vq[idx].kick_fd != -1 && VHOST_USER_IS_QUEUE_TX(idx)) {
+		vu_set_watch(vdev, vdev->vq[idx].kick_fd);
+		debug("Waiting for kicks on fd: %d for vq: %d",
+		      vdev->vq[idx].kick_fd, idx);
+	}
+
+	return false;
+}
+
+/**
+ * vu_set_vring_call_exec() - Set the event file descriptor to signal when
+ *                            buffers are used
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_call_exec(struct vu_dev *vdev,
+				   struct vhost_user_msg *msg)
+{
+	bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+	int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+	debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+	vu_check_queue_msg_file(msg);
+
+	if (vdev->vq[idx].call_fd != -1) {
+		close(vdev->vq[idx].call_fd);
+		vdev->vq[idx].call_fd = -1;
+	}
+
+	if (!nofd)
+		vdev->vq[idx].call_fd = msg->fds[0];
+
+	/* in case of I/O hang after reconnecting */
+	if (vdev->vq[idx].call_fd != -1)
+		eventfd_write(msg->fds[0], 1);
+
+	debug("Got call_fd: %d for vq: %d", vdev->vq[idx].call_fd, idx);
+
+	return false;
+}
+
+/**
+ * vu_set_vring_err_exec() - Set the event file descriptor to signal when
+ *                           an error occurs
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_err_exec(struct vu_dev *vdev,
+				  struct vhost_user_msg *msg)
+{
+	bool nofd = msg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
+	int idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+
+	debug("u64: 0x%016"PRIx64, msg->payload.u64);
+
+	vu_check_queue_msg_file(msg);
+
+	if (vdev->vq[idx].err_fd != -1) {
+		close(vdev->vq[idx].err_fd);
+		vdev->vq[idx].err_fd = -1;
+	}
+
+	if (!nofd)
+		vdev->vq[idx].err_fd = msg->fds[0];
+
+	return false;
+}
+
+/**
+ * vu_get_protocol_features_exec() - Provide the protocol (vhost-user) features
+ *                                   to the front-end
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_protocol_features_exec(struct vu_dev *vdev,
+					  struct vhost_user_msg *msg)
+{
+	uint64_t features = 1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK;
+
+	(void)vdev;
+	vmsg_set_reply_u64(msg, features);
+
+	return true;
+}
+
+/**
+ * vu_set_protocol_features_exec() - Enable protocol (vhost-user) features
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_protocol_features_exec(struct vu_dev *vdev,
+					  struct vhost_user_msg *msg)
+{
+	uint64_t features = msg->payload.u64;
+
+	debug("u64: 0x%016"PRIx64, features);
+
+	vdev->protocol_features = msg->payload.u64;
+
+	return false;
+}
+
+/**
+ * vu_get_queue_num_exec() - Tell how many queues we support
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: True as a reply is requested
+ */
+static bool vu_get_queue_num_exec(struct vu_dev *vdev,
+				  struct vhost_user_msg *msg)
+{
+	(void)vdev;
+
+	vmsg_set_reply_u64(msg, VHOST_USER_MAX_QUEUES);
+
+	return true;
+}
+
+/**
+ * vu_set_vring_enable_exec() - Enable or disable corresponding vring
+ * @vdev:	vhost-user device
+ * @msg:	vhost-user message
+ *
+ * Return: False as no reply is requested
+ */
+static bool vu_set_vring_enable_exec(struct vu_dev *vdev,
+				     struct vhost_user_msg *msg)
+{
+	unsigned int enable = msg->payload.state.num;
+	unsigned int idx = msg->payload.state.index;
+
+	debug("State.index:  %u", idx);
+	debug("State.enable: %u", enable);
+
+	if (idx >= VHOST_USER_MAX_QUEUES)
+		die("Invalid vring_enable index: %u", idx);
+
+	vdev->vq[idx].enable = enable;
+
+	return false;
+}
+
+/**
+ * vu_init() - Initialize vhost-user device structure
+ * @c:		execution context
+ * @vdev:	vhost-user device
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_init(struct ctx *c, struct vu_dev *vdev)
+{
+	int i;
+
+	vdev->context = c;
+	for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+		vdev->vq[i] = (struct vu_virtq){
+			.call_fd = -1,
+			.kick_fd = -1,
+			.err_fd = -1,
+			.notification = true,
+		};
+	}
+}
+
+/**
+ * vu_cleanup() - Reset vhost-user device
+ * @vdev:	vhost-user device
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_cleanup(struct vu_dev *vdev)
+{
+	unsigned int i;
+
+	for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) {
+		struct vu_virtq *vq = &vdev->vq[i];
+
+		vq->started = false;
+		vq->notification = true;
+
+		if (vq->call_fd != -1) {
+			close(vq->call_fd);
+			vq->call_fd = -1;
+		}
+		if (vq->err_fd != -1) {
+			close(vq->err_fd);
+			vq->err_fd = -1;
+		}
+		if (vq->kick_fd != -1) {
+			vu_remove_watch(vdev, vq->kick_fd);
+			close(vq->kick_fd);
+			vq->kick_fd = -1;
+		}
+
+		vq->vring.desc = NULL;
+		vq->vring.used = NULL;
+		vq->vring.avail = NULL;
+	}
+
+	for (i = 0; i < vdev->nregions; i++) {
+		const struct vu_dev_region *r = &vdev->regions[i];
+		/* NOLINTNEXTLINE(performance-no-int-to-ptr) */
+		void *m = (void *)r->mmap_addr;
+
+		if (m)
+			munmap(m, r->size + r->mmap_offset);
+	}
+	vdev->nregions = 0;
+}
+
+/**
+ * vu_sock_reset() - Reset connection socket
+ * @vdev:	vhost-user device
+ */
+static void vu_sock_reset(struct vu_dev *vdev)
+{
+	/* Placeholder to add passt related code */
+	(void)vdev;
+}
+
+static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
+					 struct vhost_user_msg *msg) = {
+	[VHOST_USER_GET_FEATURES]	   = vu_get_features_exec,
+	[VHOST_USER_SET_FEATURES]	   = vu_set_features_exec,
+	[VHOST_USER_GET_PROTOCOL_FEATURES] = vu_get_protocol_features_exec,
+	[VHOST_USER_SET_PROTOCOL_FEATURES] = vu_set_protocol_features_exec,
+	[VHOST_USER_GET_QUEUE_NUM]	   = vu_get_queue_num_exec,
+	[VHOST_USER_SET_OWNER]		   = vu_set_owner_exec,
+	[VHOST_USER_SET_MEM_TABLE]	   = vu_set_mem_table_exec,
+	[VHOST_USER_SET_VRING_NUM]	   = vu_set_vring_num_exec,
+	[VHOST_USER_SET_VRING_ADDR]	   = vu_set_vring_addr_exec,
+	[VHOST_USER_SET_VRING_BASE]	   = vu_set_vring_base_exec,
+	[VHOST_USER_GET_VRING_BASE]	   = vu_get_vring_base_exec,
+	[VHOST_USER_SET_VRING_KICK]	   = vu_set_vring_kick_exec,
+	[VHOST_USER_SET_VRING_CALL]	   = vu_set_vring_call_exec,
+	[VHOST_USER_SET_VRING_ERR]	   = vu_set_vring_err_exec,
+	[VHOST_USER_SET_VRING_ENABLE]	   = vu_set_vring_enable_exec,
+};
+
+/**
+ * vu_control_handler() - Handle control commands for vhost-user
+ * @vdev:	vhost-user device
+ * @fd:		vhost-user message socket
+ * @events:	epoll events
+ */
+/* cppcheck-suppress unusedFunction */
+void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
+{
+	struct vhost_user_msg msg = { 0 };
+	bool need_reply, reply_requested;
+	int ret;
+
+	if (events & (EPOLLRDHUP | EPOLLHUP | EPOLLERR)) {
vu_sock_reset(vdev); + return; + } + + ret = vu_message_read_default(fd, &msg); + if (ret == 0) { + vu_sock_reset(vdev); + return; + } + debug("================ Vhost user message ================"); + debug("Request: %s (%d)", vu_request_to_string(msg.hdr.request), + msg.hdr.request); + debug("Flags: 0x%x", msg.hdr.flags); + debug("Size: %u", msg.hdr.size); + + need_reply = msg.hdr.flags & VHOST_USER_NEED_REPLY_MASK; + + if (msg.hdr.request >= 0 && msg.hdr.request < VHOST_USER_MAX && + vu_handle[msg.hdr.request]) + reply_requested = vu_handle[msg.hdr.request](vdev, &msg); + else + die("Unhandled request: %d", msg.hdr.request); + + /* cppcheck-suppress legacyUninitvar */ + if (!reply_requested && need_reply) { + msg.payload.u64 = 0; + msg.hdr.flags = 0; + msg.hdr.size = sizeof(msg.payload.u64); + msg.fd_num = 0; + reply_requested = true; + } + + if (reply_requested) + vu_send_reply(fd, &msg); +} diff --git a/vhost_user.h b/vhost_user.h new file mode 100644 index 000000000000..17da11aef428 --- /dev/null +++ b/vhost_user.h @@ -0,0 +1,209 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * vhost-user API, command management and virtio interface + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +/* some parts from subprojects/libvhost-user/libvhost-user.h */ + +#ifndef VHOST_USER_H +#define VHOST_USER_H + +#include "virtio.h" +#include "iov.h" + +#define VHOST_USER_F_PROTOCOL_FEATURES 30 + +#define VHOST_MEMORY_BASELINE_NREGIONS 8 + +/** + * enum vhost_user_protocol_feature - List of available vhost-user features + */ +enum vhost_user_protocol_feature { + VHOST_USER_PROTOCOL_F_MQ = 0, + VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1, + VHOST_USER_PROTOCOL_F_RARP = 2, + VHOST_USER_PROTOCOL_F_REPLY_ACK = 3, + VHOST_USER_PROTOCOL_F_NET_MTU = 4, + VHOST_USER_PROTOCOL_F_BACKEND_REQ = 5, + VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6, + VHOST_USER_PROTOCOL_F_CRYPTO_SESSION = 7, + VHOST_USER_PROTOCOL_F_PAGEFAULT = 8, + VHOST_USER_PROTOCOL_F_CONFIG = 9, + VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD = 10, + VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11, + VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12, + VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS = 14, + VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS = 15, + + VHOST_USER_PROTOCOL_F_MAX +}; + +/** + * enum vhost_user_request - List of available vhost-user requests + */ +enum vhost_user_request { + VHOST_USER_NONE = 0, + VHOST_USER_GET_FEATURES = 1, + VHOST_USER_SET_FEATURES = 2, + VHOST_USER_SET_OWNER = 3, + VHOST_USER_RESET_OWNER = 4, + VHOST_USER_SET_MEM_TABLE = 5, + VHOST_USER_SET_LOG_BASE = 6, + VHOST_USER_SET_LOG_FD = 7, + VHOST_USER_SET_VRING_NUM = 8, + VHOST_USER_SET_VRING_ADDR = 9, + VHOST_USER_SET_VRING_BASE = 10, + VHOST_USER_GET_VRING_BASE = 11, + VHOST_USER_SET_VRING_KICK = 12, + VHOST_USER_SET_VRING_CALL = 13, + VHOST_USER_SET_VRING_ERR = 14, + VHOST_USER_GET_PROTOCOL_FEATURES = 15, + VHOST_USER_SET_PROTOCOL_FEATURES = 16, + VHOST_USER_GET_QUEUE_NUM = 17, + VHOST_USER_SET_VRING_ENABLE = 18, + VHOST_USER_SEND_RARP = 19, + VHOST_USER_NET_SET_MTU = 20, + VHOST_USER_SET_BACKEND_REQ_FD = 21, + VHOST_USER_IOTLB_MSG = 22, + VHOST_USER_SET_VRING_ENDIAN = 23, + VHOST_USER_GET_CONFIG = 24, + VHOST_USER_SET_CONFIG = 25, + VHOST_USER_CREATE_CRYPTO_SESSION = 26, + VHOST_USER_CLOSE_CRYPTO_SESSION = 27, + VHOST_USER_POSTCOPY_ADVISE = 28, + VHOST_USER_POSTCOPY_LISTEN = 29, + VHOST_USER_POSTCOPY_END = 30, + VHOST_USER_GET_INFLIGHT_FD = 31, + VHOST_USER_SET_INFLIGHT_FD = 32, + VHOST_USER_GPU_SET_SOCKET = 33, + VHOST_USER_VRING_KICK = 35, + 
+	VHOST_USER_GET_MAX_MEM_SLOTS = 36,
+	VHOST_USER_ADD_MEM_REG = 37,
+	VHOST_USER_REM_MEM_REG = 38,
+	VHOST_USER_MAX
+};
+
+/**
+ * struct vhost_user_header - vhost-user message header
+ * @request:	Request type of the message
+ * @flags:	Request flags
+ * @size:	The following payload size
+ */
+struct vhost_user_header {
+	enum vhost_user_request request;
+
+#define VHOST_USER_VERSION_MASK     0x3
+#define VHOST_USER_REPLY_MASK       (0x1 << 2)
+#define VHOST_USER_NEED_REPLY_MASK  (0x1 << 3)
+	uint32_t flags;
+	uint32_t size;
+} __attribute__ ((__packed__));
+
+/**
+ * struct vhost_user_memory_region - Front-end shared memory region information
+ * @guest_phys_addr:	Guest physical address of the region
+ * @memory_size:	Memory size
+ * @userspace_addr:	Front-end (QEMU) userspace address
+ * @mmap_offset:	Region offset in the shared memory area
+ */
+struct vhost_user_memory_region {
+	uint64_t guest_phys_addr;
+	uint64_t memory_size;
+	uint64_t userspace_addr;
+	uint64_t mmap_offset;
+};
+
+/**
+ * struct vhost_user_memory - List of all the shared memory regions
+ * @nregions:	Number of memory regions
+ * @padding:	Padding
+ * @regions:	Memory regions list
+ */
+struct vhost_user_memory {
+	uint32_t nregions;
+	uint32_t padding;
+	struct vhost_user_memory_region regions[VHOST_MEMORY_BASELINE_NREGIONS];
+};
+
+/**
+ * union vhost_user_payload - vhost-user message payload
+ * @u64:	64-bit payload
+ * @state:	vring state payload
+ * @addr:	vring addresses payload
+ * @memory:	Memory regions information payload
+ */
+union vhost_user_payload {
+#define VHOST_USER_VRING_IDX_MASK   0xff
+#define VHOST_USER_VRING_NOFD_MASK  (0x1 << 8)
+	uint64_t u64;
+	struct vhost_vring_state state;
+	struct vhost_vring_addr addr;
+	struct vhost_user_memory memory;
+};
+
+/**
+ * struct vhost_user_msg - vhost-user message
+ * @hdr:	Message header
+ * @payload:	Message payload
+ * @fds:	File descriptors associated with the message
+ *		in the ancillary data.
+ *		(shared memory or event file descriptors)
+ * @fd_num:	Number of file descriptors
+ */
+struct vhost_user_msg {
+	struct vhost_user_header hdr;
+	union vhost_user_payload payload;
+
+	int fds[VHOST_MEMORY_BASELINE_NREGIONS];
+	int fd_num;
+} __attribute__ ((__packed__));
+#define VHOST_USER_HDR_SIZE sizeof(struct vhost_user_header)
+
+/* index of the RX virtqueue */
+#define VHOST_USER_RX_QUEUE 0
+/* index of the TX virtqueue */
+#define VHOST_USER_TX_QUEUE 1
+
+/* in case of multiqueue, the RX and TX queues are interleaved */
+#define VHOST_USER_IS_QUEUE_TX(n)	((n) % 2)
+#define VHOST_USER_IS_QUEUE_RX(n)	(!((n) % 2))
+
+/* Default virtio-net header for passt */
+#define VU_HEADER ((struct virtio_net_hdr){	\
+	.flags = VIRTIO_NET_HDR_F_DATA_VALID,	\
+	.gso_type = VIRTIO_NET_HDR_GSO_NONE,	\
+})
+
+/**
+ * vu_queue_enabled() - Return state of a virtqueue
+ * @vq:		virtqueue to check
+ *
+ * Return: true if the virtqueue is enabled, false otherwise
+ */
+static inline bool vu_queue_enabled(const struct vu_virtq *vq)
+{
+	return vq->enable;
+}
+
+/**
+ * vu_queue_started() - Return state of a virtqueue
+ * @vq:		virtqueue to check
+ *
+ * Return: true if the virtqueue is started, false otherwise
+ */
+static inline bool vu_queue_started(const struct vu_virtq *vq)
+{
+	return vq->started;
+}
+
+int vu_send(struct vu_dev *vdev, const void *buf, size_t size);
+void vu_print_capabilities(void);
+void vu_init(struct ctx *c, struct vu_dev *vdev);
+void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref,
+		const struct timespec *now);
+void vu_cleanup(struct vu_dev *vdev);
+void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events);
+#endif /* VHOST_USER_H */
diff --git a/virtio.c b/virtio.c
index 380590afbca3..237395396606 100644
--- a/virtio.c
+++ b/virtio.c
@@ -328,7 +328,6 @@ static bool vring_can_notify(const struct vu_dev *dev, struct vu_virtq *vq)
  * @dev:	Vhost-user device
  * @vq:		Virtqueue
  */
-/* cppcheck-suppress unusedFunction */
 void vu_queue_notify(const struct vu_dev *dev, struct vu_virtq *vq)
 {
 	if (!vq->vring.avail)
@@ -504,7 +503,6 @@ static int vu_queue_map_desc(struct vu_dev *dev, struct vu_virtq *vq, unsigned i
  *
  * Return: -1 if there is an error, 0 otherwise
  */
-/* cppcheck-suppress unusedFunction */
 int vu_queue_pop(struct vu_dev *dev, struct vu_virtq *vq, struct vu_virtq_element *elem)
 {
 	unsigned int head;
@@ -553,7 +551,6 @@ void vu_queue_detach_element(struct vu_virtq *vq)
  * vu_queue_unpop() - Push back the previously popped element from the virqueue
  * @vq:	Virtqueue
  */
-/* cppcheck-suppress unusedFunction */
 void vu_queue_unpop(struct vu_virtq *vq)
 {
 	vq->last_avail_idx--;
@@ -621,7 +618,6 @@ void vu_queue_fill_by_index(struct vu_virtq *vq, unsigned int index,
  * @len:	Size of the element
  * @idx:	Used ring entry index
  */
-/* cppcheck-suppress unusedFunction */
 void vu_queue_fill(struct vu_virtq *vq, const struct vu_virtq_element *elem,
 		   unsigned int len, unsigned int idx)
 {
@@ -645,7 +641,6 @@ static inline void vring_used_idx_set(struct vu_virtq *vq, uint16_t val)
  * @vq:		Virtqueue
  * @count:	Number of entry to flush
  */
-/* cppcheck-suppress unusedFunction */
 void vu_queue_flush(struct vu_virtq *vq, unsigned int count)
 {
 	uint16_t old, new;
diff --git a/virtio.h b/virtio.h
index 94efeb049fbc..6410d60f9b3f 100644
--- a/virtio.h
+++ b/virtio.h
@@ -105,6 +105,8 @@ struct vu_dev_region {
  * @protocol_features:	Vhost-user protocol features
+ * @context:		Execution context
  */
 struct vu_dev {
+	struct ctx *context;
 	uint32_t nregions;
 	struct vu_dev_region regions[VHOST_USER_MAX_RAM_SLOTS];
 	struct vu_virtq 
vq[VHOST_USER_MAX_QUEUES]; -- 2.46.0
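The kick and call descriptors managed by the handlers above are plain
eventfds: the front-end signals by adding to a counter, and a single
read on the back-end side drains it, which is why one wakeup in
vu_kick_cb() can cover several kicks. A minimal standalone
illustration of those semantics (not taken from the patches):

	#include <stdio.h>
	#include <sys/eventfd.h>

	int main(void)
	{
		eventfd_t kicks;
		int fd = eventfd(0, 0);

		if (fd == -1)
			return 1;

		/* driver side: two kicks before the device runs */
		eventfd_write(fd, 1);
		eventfd_write(fd, 1);

		/* device side: one read drains the whole counter */
		if (eventfd_read(fd, &kicks) == 0)
			printf("%llu kicks coalesced into one wakeup\n",
			       (unsigned long long)kicks);

		return 0;
	}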
add virtio and vhost-user functions to connect with QEMU. $ ./passt --vhost-user and # qemu-system-x86_64 ... -m 4G \ -object memory-backend-memfd,id=memfd0,share=on,size=4G \ -numa node,memdev=memfd0 \ -chardev socket,id=chr0,path=/tmp/passt_1.socket \ -netdev vhost-user,id=netdev0,chardev=chr0 \ -device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \ ... Signed-off-by: Laurent Vivier <lvivier(a)redhat.com> --- Makefile | 6 +- checksum.c | 1 - conf.c | 23 +- epoll_type.h | 4 + isolation.c | 17 +- packet.c | 11 + packet.h | 8 +- passt.1 | 10 +- passt.c | 26 +- passt.h | 6 + pcap.c | 1 - tap.c | 111 +++++++-- tap.h | 5 +- tcp.c | 31 ++- tcp_buf.c | 8 +- tcp_internal.h | 3 +- tcp_vu.c | 647 +++++++++++++++++++++++++++++++++++++++++++++++++ tcp_vu.h | 12 + udp.c | 78 +++--- udp.h | 8 +- udp_internal.h | 34 +++ udp_vu.c | 397 ++++++++++++++++++++++++++++++ udp_vu.h | 13 + vhost_user.c | 32 +-- virtio.c | 1 - vu_common.c | 36 +++ vu_common.h | 34 +++ 27 files changed, 1457 insertions(+), 106 deletions(-) create mode 100644 tcp_vu.c create mode 100644 tcp_vu.h create mode 100644 udp_internal.h create mode 100644 udp_vu.c create mode 100644 udp_vu.h create mode 100644 vu_common.c create mode 100644 vu_common.h diff --git a/Makefile b/Makefile index 0e8ed60a0da1..1e8910dda1f4 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,8 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS) PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \ icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c + tcp_buf.c tcp_splice.c tcp_vu.c udp.c udp_flow.c udp_vu.c util.c \ + vhost_user.c virtio.c vu_common.c QRAP_SRCS = qrap.c SRCS = $(PASST_SRCS) $(QRAP_SRCS) @@ -64,7 +65,8 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \ flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ - udp.h udp_flow.h util.h vhost_user.h virtio.h + tcp_vu.h udp.h udp_flow.h udp_internal.h udp_vu.h util.h vhost_user.h \ + virtio.h vu_common.h HEADERS = $(PASST_HEADERS) seccomp.h C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 }; diff --git a/checksum.c b/checksum.c index 006614fcbb28..aa5b7ae1cb66 100644 --- a/checksum.c +++ b/checksum.c @@ -501,7 +501,6 @@ uint16_t csum(const void *buf, size_t len, uint32_t init) * * Return: 16-bit folded, complemented checksum */ -/* cppcheck-suppress unusedFunction */ uint16_t csum_iov(const struct iovec *iov, size_t n, uint32_t init) { unsigned int i; diff --git a/conf.c b/conf.c index b27588649af3..eb8e1685713a 100644 --- a/conf.c +++ b/conf.c @@ -45,6 +45,7 @@ #include "lineread.h" #include "isolation.h" #include "log.h" +#include "vhost_user.h" /** * next_chunk - Return the next piece of a string delimited by a character @@ -769,9 +770,14 @@ static void usage(const char *name, FILE *f, int status) " default: same interface name as external one\n"); } else { fprintf(f, - " -s, --socket PATH UNIX domain socket path\n" + " -s, --socket, --socket-path PATH UNIX domain socket path\n" " default: probe free path starting from " UNIX_SOCK_PATH "\n", 1); + fprintf(f, + " --vhost-user Enable vhost-user mode\n" + " UNIX domain socket is provided by -s option\n" + " --print-capabilities print back-end capabilities in JSON 
format,\n" + " only meaningful for vhost-user mode\n"); } fprintf(f, @@ -1291,6 +1297,10 @@ void conf(struct ctx *c, int argc, char **argv) {"netns-only", no_argument, NULL, 20 }, {"map-host-loopback", required_argument, NULL, 21 }, {"map-guest-addr", required_argument, NULL, 22 }, + {"vhost-user", no_argument, NULL, 23 }, + /* vhost-user backend program convention */ + {"print-capabilities", no_argument, NULL, 24 }, + {"socket-path", required_argument, NULL, 's' }, { 0 }, }; const char *logname = (c->mode == MODE_PASTA) ? "pasta" : "passt"; @@ -1429,7 +1439,6 @@ void conf(struct ctx *c, int argc, char **argv) sizeof(c->ip6.ifname_out), "%s", optarg); if (ret <= 0 || ret >= (int)sizeof(c->ip6.ifname_out)) die("Invalid interface name: %s", optarg); - break; case 17: if (c->mode != MODE_PASTA) @@ -1468,6 +1477,16 @@ void conf(struct ctx *c, int argc, char **argv) conf_nat(optarg, &c->ip4.map_guest_addr, &c->ip6.map_guest_addr, NULL); break; + case 23: + if (c->mode == MODE_PASTA) { + err("--vhost-user is for passt mode only"); + usage(argv[0], stdout, EXIT_SUCCESS); + } + c->mode = MODE_VU; + break; + case 24: + vu_print_capabilities(); + break; case 'd': c->debug = 1; c->quiet = 0; diff --git a/epoll_type.h b/epoll_type.h index 0ad1efa0ccec..f3ef41584757 100644 --- a/epoll_type.h +++ b/epoll_type.h @@ -36,6 +36,10 @@ enum epoll_type { EPOLL_TYPE_TAP_PASST, /* socket listening for qemu socket connections */ EPOLL_TYPE_TAP_LISTEN, + /* vhost-user command socket */ + EPOLL_TYPE_VHOST_CMD, + /* vhost-user kick event socket */ + EPOLL_TYPE_VHOST_KICK, EPOLL_NUM_TYPES, }; diff --git a/isolation.c b/isolation.c index 45fba1e68b9d..3d5fd60fde46 100644 --- a/isolation.c +++ b/isolation.c @@ -377,14 +377,21 @@ void isolate_postfork(const struct ctx *c) { struct sock_fprog prog; - prctl(PR_SET_DUMPABLE, 0); + //prctl(PR_SET_DUMPABLE, 0); - if (c->mode == MODE_PASTA) { - prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); - prog.filter = filter_pasta; - } else { + switch (c->mode) { + case MODE_PASST: prog.len = (unsigned short)ARRAY_SIZE(filter_passt); prog.filter = filter_passt; + break; + case MODE_PASTA: + prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); + prog.filter = filter_pasta; + break; + case MODE_VU: + prog.len = (unsigned short)ARRAY_SIZE(filter_vu); + prog.filter = filter_vu; + break; } if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) || diff --git a/packet.c b/packet.c index 37489961a37e..e5a78d079231 100644 --- a/packet.c +++ b/packet.c @@ -36,6 +36,17 @@ static int packet_check_range(const struct pool *p, size_t offset, size_t len, const char *start, const char *func, int line) { + if (p->buf_size == 0) { + int ret; + + ret = vu_packet_check_range((void *)p->buf, offset, len, start); + + if (ret == -1) + trace("cannot find region, %s:%i", func, line); + + return ret; + } + if (start < p->buf) { trace("packet start %p before buffer start %p, " "%s:%i", (void *)start, (void *)p->buf, func, line); diff --git a/packet.h b/packet.h index 8377dcf678bb..3f70e949c066 100644 --- a/packet.h +++ b/packet.h @@ -8,8 +8,10 @@ /** * struct pool - Generic pool of packets stored in a buffer - * @buf: Buffer storing packet descriptors - * @buf_size: Total size of buffer + * @buf: Buffer storing packet descriptors, + * a struct vu_dev_region array for passt vhost-user mode + * @buf_size: Total size of buffer, + * 0 for passt vhost-user mode * @size: Number of usable descriptors for the pool * @count: Number of used descriptors for the pool * @pkt: Descriptors: see macros below @@ -22,6 +24,8 @@ struct 
pool { struct iovec pkt[1]; }; +int vu_packet_check_range(void *buf, size_t offset, size_t len, + const char *start); void packet_add_do(struct pool *p, size_t len, const char *start, const char *func, int line); void *packet_get_do(const struct pool *p, const size_t idx, diff --git a/passt.1 b/passt.1 index 79d134dbe098..822714147be8 100644 --- a/passt.1 +++ b/passt.1 @@ -378,12 +378,20 @@ interface address are configured on a given host interface. .SS \fBpasst\fR-only options .TP -.BR \-s ", " \-\-socket " " \fIpath +.BR \-s ", " \-\-socket-path ", " \-\-socket " " \fIpath Path for UNIX domain socket used by \fBqemu\fR(1) or \fBqrap\fR(1) to connect to \fBpasst\fR. Default is to probe a free socket, not accepting connections, starting from \fI/tmp/passt_1.socket\fR to \fI/tmp/passt_64.socket\fR. +.TP +.BR \-\-vhost-user +Enable vhost-user. The vhost-user command socket is provided by \fB--socket\fR. + +.TP +.BR \-\-print-capabilities +Print back-end capabilities in JSON format, only meaningful for vhost-user mode. + .TP .BR \-F ", " \-\-fd " " \fIFD Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket is opened diff --git a/passt.c b/passt.c index ad6f0bc32df6..b64efeaf346c 100644 --- a/passt.c +++ b/passt.c @@ -74,6 +74,8 @@ char *epoll_type_str[] = { [EPOLL_TYPE_TAP_PASTA] = "/dev/net/tun device", [EPOLL_TYPE_TAP_PASST] = "connected qemu socket", [EPOLL_TYPE_TAP_LISTEN] = "listening qemu socket", + [EPOLL_TYPE_VHOST_CMD] = "vhost-user command socket", + [EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket", }; static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES, "epoll_type_str[] doesn't match enum epoll_type"); @@ -206,6 +208,7 @@ int main(int argc, char **argv) struct rlimit limit; struct timespec now; struct sigaction sa; + struct vu_dev vdev; clock_gettime(CLOCK_MONOTONIC, &log_start); @@ -262,6 +265,8 @@ int main(int argc, char **argv) pasta_netns_quit_init(&c); tap_sock_init(&c); + if (c.mode == MODE_VU) + vu_init(&c, &vdev); secret_init(&c); @@ -352,14 +357,31 @@ loop: tcp_timer_handler(&c, ref); break; case EPOLL_TYPE_UDP_LISTEN: - udp_listen_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) { + udp_vu_listen_sock_handler(&c, ref, eventmask, + &now); + } else { + udp_buf_listen_sock_handler(&c, ref, eventmask, + &now); + } break; case EPOLL_TYPE_UDP_REPLY: - udp_reply_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) + udp_vu_reply_sock_handler(&c, ref, eventmask, + &now); + else + udp_buf_reply_sock_handler(&c, ref, eventmask, + &now); break; case EPOLL_TYPE_PING: icmp_sock_handler(&c, ref); break; + case EPOLL_TYPE_VHOST_CMD: + vu_control_handler(&vdev, c.fd_tap, eventmask); + break; + case EPOLL_TYPE_VHOST_KICK: + vu_kick_cb(&vdev, ref, &now); + break; default: /* Can't happen */ ASSERT(0); diff --git a/passt.h b/passt.h index 031c9b669cc4..a98f043c7e64 100644 --- a/passt.h +++ b/passt.h @@ -25,6 +25,8 @@ union epoll_ref; #include "fwd.h" #include "tcp.h" #include "udp.h" +#include "udp_vu.h" +#include "vhost_user.h" /* Default address for our end on the tap interface. Bit 0 of byte 0 must be 0 * (unicast) and bit 1 of byte 1 must be 1 (locally administered). 
Otherwise @@ -94,6 +96,7 @@ struct fqdn { enum passt_modes { MODE_PASST, MODE_PASTA, + MODE_VU, }; /** @@ -227,6 +230,7 @@ struct ip6_ctx { * @no_ra: Disable router advertisements * @low_wmem: Low probed net.core.wmem_max * @low_rmem: Low probed net.core.rmem_max + * @vdev: vhost-user device */ struct ctx { enum passt_modes mode; @@ -287,6 +291,8 @@ struct ctx { int low_wmem; int low_rmem; + + struct vu_dev *vdev; }; void proto_update_l2_buf(const unsigned char *eth_d, diff --git a/pcap.c b/pcap.c index 46cc4b0d72b6..7e9c56090041 100644 --- a/pcap.c +++ b/pcap.c @@ -140,7 +140,6 @@ void pcap_multiple(const struct iovec *iov, size_t frame_parts, unsigned int n, * containing packet data to write, including L2 header * @iovcnt: Number of buffers (@iov entries) */ -/* cppcheck-suppress unusedFunction */ void pcap_iov(const struct iovec *iov, size_t iovcnt) { struct timespec now; diff --git a/tap.c b/tap.c index 41af6a6d0c85..3e1b3c13c321 100644 --- a/tap.c +++ b/tap.c @@ -58,6 +58,7 @@ #include "packet.h" #include "tap.h" #include "log.h" +#include "vhost_user.h" /* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */ static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf); @@ -78,16 +79,22 @@ void tap_send_single(const struct ctx *c, const void *data, size_t l2len) struct iovec iov[2]; size_t iovcnt = 0; - if (c->mode == MODE_PASST) { + switch (c->mode) { + case MODE_PASST: iov[iovcnt] = IOV_OF_LVALUE(vnet_len); iovcnt++; - } - - iov[iovcnt].iov_base = (void *)data; - iov[iovcnt].iov_len = l2len; - iovcnt++; + /* fall through */ + case MODE_PASTA: + iov[iovcnt].iov_base = (void *)data; + iov[iovcnt].iov_len = l2len; + iovcnt++; - tap_send_frames(c, iov, iovcnt, 1); + tap_send_frames(c, iov, iovcnt, 1); + break; + case MODE_VU: + vu_send(c->vdev, data, l2len); + break; + } } /** @@ -406,10 +413,18 @@ size_t tap_send_frames(const struct ctx *c, const struct iovec *iov, if (!nframes) return 0; - if (c->mode == MODE_PASTA) + switch (c->mode) { + case MODE_PASTA: m = tap_send_frames_pasta(c, iov, bufs_per_frame, nframes); - else + break; + case MODE_PASST: m = tap_send_frames_passt(c, iov, bufs_per_frame, nframes); + break; + case MODE_VU: + /* fall through */ + default: + ASSERT(0); + } if (m < nframes) debug("tap: failed to send %zu frames of %zu", @@ -968,7 +983,7 @@ void tap_add_packet(struct ctx *c, ssize_t l2len, char *p) * tap_sock_reset() - Handle closing or failure of connect AF_UNIX socket * @c: Execution context */ -static void tap_sock_reset(struct ctx *c) +void tap_sock_reset(struct ctx *c) { info("Client connection closed%s", c->one_off ? ", exiting" : ""); @@ -979,6 +994,8 @@ static void tap_sock_reset(struct ctx *c) epoll_ctl(c->epollfd, EPOLL_CTL_DEL, c->fd_tap, NULL); close(c->fd_tap); c->fd_tap = -1; + if (c->mode == MODE_VU) + vu_cleanup(c->vdev); } /** @@ -1196,11 +1213,17 @@ static void tap_sock_unix_init(struct ctx *c) ev.data.u64 = ref.u64; epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap_listen, &ev); - info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):"); - info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s", - c->sock_path); - info("or qrap, for earlier qemu versions:"); - info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio"); + if (c->mode == MODE_VU) { + info("You can start qemu with:"); + info(" kvm ... 
-chardev socket,id=chr0,path=%s -netdev vhost-user,id=netdev0,chardev=chr0 -device virtio-net,netdev=netdev0 -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE -numa node,memdev=memfd0\n",
+		     c->sock_path);
+	} else {
+		info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):");
+		info("    kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s",
+		     c->sock_path);
+		info("or qrap, for earlier qemu versions:");
+		info("    ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio");
+	}
 }
 
 /**
@@ -1210,8 +1233,8 @@ static void tap_sock_unix_init(struct ctx *c)
  */
 void tap_listen_handler(struct ctx *c, uint32_t events)
 {
-	union epoll_ref ref = { .type = EPOLL_TYPE_TAP_PASST };
 	struct epoll_event ev = { 0 };
+	union epoll_ref ref;
 	int v = INT_MAX / 2;
 	struct ucred ucred;
 	socklen_t len;
@@ -1251,6 +1274,10 @@ void tap_listen_handler(struct ctx *c, uint32_t events)
 		trace("tap: failed to set SO_SNDBUF to %i", v);
 
 	ref.fd = c->fd_tap;
+	if (c->mode == MODE_VU)
+		ref.type = EPOLL_TYPE_VHOST_CMD;
+	else
+		ref.type = EPOLL_TYPE_TAP_PASST;
 	ev.events = EPOLLIN | EPOLLRDHUP;
 	ev.data.u64 = ref.u64;
 	epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev);
@@ -1312,21 +1339,52 @@ static void tap_sock_tun_init(struct ctx *c)
 	epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev);
 }
 
+/**
+ * tap_sock_update_buf() - Set the buffer base and size for the pool of packets
+ * @base:	Buffer base
+ * @size:	Buffer size
+ */
+void tap_sock_update_buf(void *base, size_t size)
+{
+	int i;
+
+	pool_tap4_storage.buf = base;
+	pool_tap4_storage.buf_size = size;
+	pool_tap6_storage.buf = base;
+	pool_tap6_storage.buf_size = size;
+
+	for (i = 0; i < TAP_SEQS; i++) {
+		tap4_l4[i].p.buf = base;
+		tap4_l4[i].p.buf_size = size;
+		tap6_l4[i].p.buf = base;
+		tap6_l4[i].p.buf_size = size;
+	}
+}
+
 /**
  * tap_sock_init() - Create and set up AF_UNIX socket or tuntap file descriptor
  * @c:		Execution context
  */
 void tap_sock_init(struct ctx *c)
 {
-	size_t sz = sizeof(pkt_buf);
+	size_t sz;
+	char *buf;
 	int i;
 
-	pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, pkt_buf, sz);
-	pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, pkt_buf, sz);
+	if (c->mode == MODE_VU) {
+		buf = NULL;
+		sz = 0;
+	} else {
+		buf = pkt_buf;
+		sz = sizeof(pkt_buf);
+	}
+
+	pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, buf, sz);
+	pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, buf, sz);
 
 	for (i = 0; i < TAP_SEQS; i++) {
-		tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz);
-		tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz);
+		tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz);
+		tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz);
 	}
 
 	if (c->fd_tap != -1) { /* Passed as --fd */
@@ -1335,10 +1393,17 @@ void tap_sock_init(struct ctx *c)
 		ASSERT(c->one_off);
 
 		ref.fd = c->fd_tap;
-		if (c->mode == MODE_PASST)
+		switch (c->mode) {
+		case MODE_PASST:
 			ref.type = EPOLL_TYPE_TAP_PASST;
-		else
+			break;
+		case MODE_PASTA:
 			ref.type = EPOLL_TYPE_TAP_PASTA;
+			break;
+		case MODE_VU:
+			ref.type = EPOLL_TYPE_VHOST_CMD;
+			break;
+		}
 
 		ev.events = EPOLLIN | EPOLLRDHUP;
 		ev.data.u64 = ref.u64;
diff --git a/tap.h b/tap.h
index ec9e2acec460..c5447f7077eb 100644
--- a/tap.h
+++ b/tap.h
@@ -40,7 +40,8 @@ static inline struct iovec tap_hdr_iov(const struct ctx *c,
  */
 static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len)
 {
-	thdr->vnet_len = htonl(l2len);
+	if (thdr)
+		thdr->vnet_len = htonl(l2len);
 }
 
 void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sport,
@@
-68,6 +69,8 @@ void tap_handler_pasta(struct ctx *c, uint32_t events, void tap_handler_passt(struct ctx *c, uint32_t events, const struct timespec *now); int tap_sock_unix_open(char *sock_path); +void tap_sock_reset(struct ctx *c); +void tap_sock_update_buf(void *base, size_t size); void tap_sock_init(struct ctx *c); void tap_flush_pools(void); void tap_handler(struct ctx *c, const struct timespec *now); diff --git a/tcp.c b/tcp.c index f9fe1b9a1330..b4b8864799a8 100644 --- a/tcp.c +++ b/tcp.c @@ -304,6 +304,7 @@ #include "flow_table.h" #include "tcp_internal.h" #include "tcp_buf.h" +#include "tcp_vu.h" /* MSS rounding: see SET_MSS() */ #define MSS_DEFAULT 536 @@ -903,6 +904,7 @@ static void tcp_fill_header(struct tcphdr *th, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv4 payload length, host order */ @@ -910,7 +912,7 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct iphdr *iph, struct tcphdr *th, size_t dlen, const uint16_t *check, - uint32_t seq) + uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *src4 = inany_v4(&tapside->oaddr); @@ -929,7 +931,10 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp4(iph, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp4(iph, th); tap_hdr_update(taph, l3len + sizeof(struct ethhdr)); @@ -945,13 +950,14 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv6 payload length, host order */ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct ipv6hdr *ip6h, struct tcphdr *th, - size_t dlen, uint32_t seq) + size_t dlen, uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); size_t l4len = dlen + sizeof(*th); @@ -970,7 +976,10 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp6(ip6h, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp6(ip6h, th); tap_hdr_update(taph, l4len + sizeof(*ip6h) + sizeof(struct ethhdr)); @@ -984,12 +993,14 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: IP payload length, host order */ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, struct iovec *iov, size_t dlen, - const uint16_t *check, uint32_t seq) + const uint16_t *check, uint32_t seq, + bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *a4 = inany_v4(&tapside->oaddr); @@ -998,13 +1009,13 @@ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, return tcp_fill_headers4(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - check, seq); + check, seq, no_tcp_csum); } return tcp_fill_headers6(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - seq); + seq, no_tcp_csum); } /** @@ -1237,6 +1248,9 @@ int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, */ int tcp_send_flag(struct ctx *c, struct 
tcp_tap_conn *conn, int flags)
 {
+	if (c->mode == MODE_VU)
+		return tcp_vu_send_flag(c, conn, flags);
+
 	return tcp_buf_send_flag(c, conn, flags);
 }
 
@@ -1630,6 +1644,9 @@ static int tcp_sock_consume(const struct tcp_tap_conn *conn, uint32_t ack_seq)
  */
 static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
 {
+	if (c->mode == MODE_VU)
+		return tcp_vu_data_from_sock(c, conn);
+
 	return tcp_buf_data_from_sock(c, conn);
 }
 
diff --git a/tcp_buf.c b/tcp_buf.c
index 1a398461a34b..10a663bdfc26 100644
--- a/tcp_buf.c
+++ b/tcp_buf.c
@@ -320,7 +320,7 @@ int tcp_buf_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags)
 		return ret;
 	}
 
-	l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq);
+	l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq, false);
 	iov[TCP_IOV_PAYLOAD].iov_len = l4len;
 
 	if (flags & DUP_ACK) {
@@ -381,7 +381,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
 		tcp4_frame_conns[tcp4_payload_used] = conn;
 
 		iov = tcp4_l2_iov[tcp4_payload_used++];
-		l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq);
+		l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq,
+						false);
 		iov[TCP_IOV_PAYLOAD].iov_len = l4len;
 		if (tcp4_payload_used > TCP_FRAMES_MEM - 1)
 			tcp_payload_flush(c);
@@ -389,7 +390,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn,
 		tcp6_frame_conns[tcp6_payload_used] = conn;
 
 		iov = tcp6_l2_iov[tcp6_payload_used++];
-		l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq);
+		l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq,
+						false);
 		iov[TCP_IOV_PAYLOAD].iov_len = l4len;
 		if (tcp6_payload_used > TCP_FRAMES_MEM - 1)
 			tcp_payload_flush(c);
diff --git a/tcp_internal.h b/tcp_internal.h
index aa8bb64f1f33..e7fe735bfcb4 100644
--- a/tcp_internal.h
+++ b/tcp_internal.h
@@ -91,7 +91,8 @@ void tcp_rst_do(struct ctx *c, struct tcp_tap_conn *conn);
 
 size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn,
 			       struct iovec *iov, size_t dlen,
-			       const uint16_t *check, uint32_t seq);
+			       const uint16_t *check, uint32_t seq,
+			       bool no_tcp_csum);
 int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn,
 			  int force_seq, struct tcp_info *tinfo);
 int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, int flags,
diff --git a/tcp_vu.c b/tcp_vu.c
new file mode 100644
index 000000000000..e3e32d628524
--- /dev/null
+++ b/tcp_vu.c
@@ -0,0 +1,647 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* tcp_vu.c - TCP L2 vhost-user management functions
+ *
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier(a)redhat.com>
+ */
+
+#include <errno.h>
+#include <stddef.h>
+#include <stdint.h>
+
+#include <netinet/ip.h>
+
+#include <sys/socket.h>
+
+#include <linux/tcp.h>
+#include <linux/virtio_net.h>
+
+#include "util.h"
+#include "ip.h"
+#include "passt.h"
+#include "siphash.h"
+#include "inany.h"
+#include "vhost_user.h"
+#include "tcp.h"
+#include "pcap.h"
+#include "flow.h"
+#include "tcp_conn.h"
+#include "flow_table.h"
+#include "tcp_vu.h"
+#include "tcp_internal.h"
+#include "checksum.h"
+#include "vu_common.h"
+
+/**
+ * struct tcp_payload_t - TCP header and data to send segments with payload
+ * @th:		TCP header
+ * @data:	TCP data
+ */
+struct tcp_payload_t {
+	struct tcphdr th;
+	uint8_t data[IP_MAX_MTU - sizeof(struct tcphdr)];
+};
+
+/**
+ * struct tcp_flags_t - TCP header and data to send zero-length
+ *                      segments (flags)
+ * @th:		TCP header
+ * @opts:	TCP options
+ */
+struct tcp_flags_t {
+	struct tcphdr th;
+	char opts[OPT_MSS_LEN + OPT_WS_LEN + 1];
+};
+
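/* Hedged aside, not part of the patch: th.doff is expressed in 32-bit
 * words, so the ".doff = offsetof(...) / 4" initializers used below are
 * only exact if the opts/data member sits at a 4-byte aligned offset.
 * A hypothetical mirror struct shows the compile-time guard (C11):
 */
#include <assert.h>
#include <stddef.h>
#include <linux/tcp.h>

/* mirrors the layout of struct tcp_flags_t, for the check only */
struct tcp_flags_layout {
	struct tcphdr th;
	char opts[8];
};

static_assert(offsetof(struct tcp_flags_layout, opts) % 4 == 0,
	      "TCP doff counts 32-bit words");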
+static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE];
+static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];
+
+/**
+ * tcp_vu_l2_hdrlen() - Return the size of the headers of an L2 frame (TCP)
+ * @v6:	Set for IPv6 packet
+ *
+ * Return: Size of the L2 frame headers
+ */
+static size_t tcp_vu_l2_hdrlen(bool v6)
+{
+	size_t l2_hdrlen;
+
+	l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) +
+		    sizeof(struct tcphdr);
+
+	if (v6)
+		l2_hdrlen += sizeof(struct ipv6hdr);
+	else
+		l2_hdrlen += sizeof(struct iphdr);
+
+	return l2_hdrlen;
+}
+
+/**
+ * tcp_vu_pcap() - Capture a single frame to pcap file (TCP)
+ * @c:		Execution context
+ * @tapside:	Address information for one side of the flow
+ * @iov:	Pointer to the array of IO vectors
+ * @iov_used:	Length of the array
+ * @l4len:	Layer-4 payload length
+ */
+static void tcp_vu_pcap(const struct ctx *c, const struct flowside *tapside,
+			struct iovec *iov, int iov_used, size_t l4len)
+{
+	const struct in_addr *src = inany_v4(&tapside->oaddr);
+	const struct in_addr *dst = inany_v4(&tapside->eaddr);
+	char *base = iov[0].iov_base;
+	size_t size = iov[0].iov_len;
+	struct tcp_payload_t *bp;
+	uint32_t sum;
+
+	if (!*c->pcap)
+		return;
+
+	if (src && dst) {
+		bp = vu_payloadv4(base);
+		sum = proto_ipv4_header_psum(l4len, IPPROTO_TCP,
+					     *src, *dst);
+	} else {
+		bp = vu_payloadv6(base);
+		sum = proto_ipv6_header_psum(l4len, IPPROTO_TCP,
+					     &tapside->oaddr.a6,
+					     &tapside->eaddr.a6);
+	}
+	iov[0].iov_base = &bp->th;
+	iov[0].iov_len = size - ((char *)iov[0].iov_base - base);
+	bp->th.check = 0;
+	bp->th.check = csum_iov(iov, iov_used, sum);
+
+	/* set iov for pcap logging */
+	iov[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	iov[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf);
+
+	pcap_iov(iov, iov_used);
+
+	/* restore iov[0] */
+	iov[0].iov_base = base;
+	iov[0].iov_len = size;
+}
+
+/**
+ * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload)
+ * @c:		Execution context
+ * @conn:	Connection pointer
+ * @flags:	TCP flags: if not set, send segment only if ACK is due
+ *
+ * Return: negative error code on connection reset, 0 otherwise
+ */
+int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags)
+{
+	struct vu_dev *vdev = c->vdev;
+	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+	const struct flowside *tapside = TAPFLOW(conn);
+	struct virtio_net_hdr_mrg_rxbuf *vh;
+	struct iovec l2_iov[TCP_NUM_IOVS];
+	size_t l2len, l4len, optlen;
+	struct iovec in_sg;
+	struct ethhdr *eh;
+	int nb_ack;
+	int ret;
+
+	elem[0].out_num = 0;
+	elem[0].out_sg = NULL;
+	elem[0].in_num = 1;
+	elem[0].in_sg = &in_sg;
+	ret = vu_queue_pop(vdev, vq, &elem[0]);
+	if (ret < 0)
+		return 0;
+
+	if (elem[0].in_num < 1) {
+		debug("virtio-net receive queue contains no in buffers");
+		vu_queue_rewind(vq, 1);
+		return 0;
+	}
+
+	vh = elem[0].in_sg[0].iov_base;
+
+	vh->hdr = VU_HEADER;
+	if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
+		vh->num_buffers = htole16(1);
+
+	l2_iov[TCP_IOV_TAP].iov_base = NULL;
+	l2_iov[TCP_IOV_TAP].iov_len = 0;
+	l2_iov[TCP_IOV_ETH].iov_base = (char *)elem[0].in_sg[0].iov_base + sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr);
+
+	eh = l2_iov[TCP_IOV_ETH].iov_base;
+
+	memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
+	memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));
+
+	if (CONN_V4(conn)) {
+		struct tcp_flags_t *payload;
+		struct iphdr *iph;
+		uint32_t seq;
+
+		l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+					      l2_iov[TCP_IOV_ETH].iov_len;
+		l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr);
+		l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+						   l2_iov[TCP_IOV_IP].iov_len;
+
+		eh->h_proto = htons(ETH_P_IP);
+
+		iph = l2_iov[TCP_IOV_IP].iov_base;
+		*iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP);
+
+		payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+		payload->th = (struct tcphdr){
+			.doff = offsetof(struct tcp_flags_t, opts) / 4,
+			.ack = 1
+		};
+
+		seq = conn->seq_to_tap;
+		ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen);
+		if (ret <= 0) {
+			vu_queue_rewind(vq, 1);
+			return ret;
+		}
+
+		l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq,
+						true);
+		/* keep the following assignment for clarity */
+		/* cppcheck-suppress unreadVariable */
+		l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+		l2len = l4len + sizeof(*iph) + sizeof(struct ethhdr);
+	} else {
+		struct tcp_flags_t *payload;
+		struct ipv6hdr *ip6h;
+		uint32_t seq;
+
+		l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+					      l2_iov[TCP_IOV_ETH].iov_len;
+		l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr);
+		l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+						   l2_iov[TCP_IOV_IP].iov_len;
+
+		eh->h_proto = htons(ETH_P_IPV6);
+
+		ip6h = l2_iov[TCP_IOV_IP].iov_base;
+		*ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP);
+
+		payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+		payload->th = (struct tcphdr){
+			.doff = offsetof(struct tcp_flags_t, opts) / 4,
+			.ack = 1
+		};
+
+		seq = conn->seq_to_tap;
+		ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen);
+		if (ret <= 0) {
+			vu_queue_rewind(vq, 1);
+			return ret;
+		}
+
+		l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq,
+						true);
+		/* keep the following assignment for clarity */
+		/* cppcheck-suppress unreadVariable */
+		l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+		l2len = l4len + sizeof(*ip6h) + sizeof(struct ethhdr);
+	}
+	l2len += sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	ASSERT(l2len <= elem[0].in_sg[0].iov_len);
+
+	elem[0].in_sg[0].iov_len = l2len;
+	tcp_vu_pcap(c, tapside, &elem[0].in_sg[0], 1, l4len);
+
+	vu_queue_fill(vq, &elem[0], l2len, 0);
+	nb_ack = 1;
+
+	if (flags & DUP_ACK) {
+		struct iovec in_sg_dup;
+
+		elem[1].out_num = 0;
+		elem[1].out_sg = NULL;
+		elem[1].in_num = 1;
+		elem[1].in_sg = &in_sg_dup;
+		ret = vu_queue_pop(vdev, vq, &elem[1]);
+		if (ret == 0) {
+			if (elem[1].in_num < 1 || elem[1].in_sg[0].iov_len < l2len) {
+				vu_queue_rewind(vq, 1);
+			} else {
+				memcpy(elem[1].in_sg[0].iov_base, vh, l2len);
+				nb_ack++;
+
+				tcp_vu_pcap(c, tapside, &elem[1].in_sg[0], 1,
+					    l4len);
+
+				vu_queue_fill(vq, &elem[1], l2len, 1);
+			}
+		}
+	}
+
+	vu_queue_flush(vq, nb_ack);
+	vu_queue_notify(vdev, vq);
+
+	return 0;
+}
+
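/* Sketch of the peek pattern used by tcp_vu_sock_recv() below
 * (standalone and hypothetical, not patch code): data is read with
 * MSG_PEEK so it stays queued in the kernel until it is ACKed, and
 * bytes already sent to the guest are skipped by letting the first
 * iovec land in a scratch buffer, as iov_vu[0]/tcp_buf_discard does.
 */
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

static char discard[16384];	/* assumes already_sent <= sizeof(discard) */

static ssize_t peek_new_bytes(int sock, size_t already_sent,
			      void *dst, size_t len)
{
	struct iovec iov[2] = {
		{ .iov_base = discard,	.iov_len = already_sent },
		{ .iov_base = dst,	.iov_len = len },
	};
	struct msghdr mh = { .msg_iov = iov, .msg_iovlen = 2 };
	ssize_t ret = recvmsg(sock, &mh, MSG_PEEK);

	/* nothing is consumed: the kernel keeps the data until ACKed */
	if (ret <= (ssize_t)already_sent)
		return ret < 0 ? ret : 0;

	return ret - already_sent;
}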
+/**
+ * tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers
+ * @c:		Execution context
+ * @conn:	Connection pointer
+ * @v4:		Set for IPv4 connections
+ * @fillsize:	Number of bytes we can receive
+ * @data_len:	Size of received data (output)
+ *
+ * Return: Number of iov entries used to store the data
+ */
+static ssize_t tcp_vu_sock_recv(struct ctx *c,
+				struct tcp_tap_conn *conn, bool v4,
+				size_t fillsize, ssize_t *data_len)
+{
+	struct vu_dev *vdev = c->vdev;
+	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+	static struct iovec in_sg[VIRTQUEUE_MAX_SIZE];
+	struct msghdr mh_sock = { 0 };
+	uint16_t mss = MSS_GET(conn);
+	static int in_sg_count;
+	int s = conn->sock;
+	size_t l2_hdrlen;
+	int segment_size;
+	int iov_cnt;
+	ssize_t ret;
+
+	l2_hdrlen = tcp_vu_l2_hdrlen(!v4);
+
+	iov_cnt = 0;
+	in_sg_count = 0;
+	segment_size = 0;
+	*data_len = 0;
+	while (fillsize > 0 && iov_cnt < VIRTQUEUE_MAX_SIZE - 1 &&
+	       in_sg_count < ARRAY_SIZE(in_sg)) {
+		elem[iov_cnt].out_num = 0;
+		elem[iov_cnt].out_sg = NULL;
+		elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count;
+		elem[iov_cnt].in_sg = &in_sg[in_sg_count];
+		ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]);
+		if (ret < 0)
+			break;
+
+		if (elem[iov_cnt].in_num < 1) {
+			warn("virtio-net receive queue contains no in buffers");
+			break;
+		}
+
+		in_sg_count += elem[iov_cnt].in_num;
+
+		ASSERT(elem[iov_cnt].in_num == 1);
+		ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);
+
+		if (segment_size == 0) {
+			iov_vu[iov_cnt + 1].iov_base =
+				(char *)elem[iov_cnt].in_sg[0].iov_base + l2_hdrlen;
+			iov_vu[iov_cnt + 1].iov_len =
+				elem[iov_cnt].in_sg[0].iov_len - l2_hdrlen;
+		} else {
+			iov_vu[iov_cnt + 1].iov_base = elem[iov_cnt].in_sg[0].iov_base;
+			iov_vu[iov_cnt + 1].iov_len = elem[iov_cnt].in_sg[0].iov_len;
+		}
+
+		if (iov_vu[iov_cnt + 1].iov_len > fillsize)
+			iov_vu[iov_cnt + 1].iov_len = fillsize;
+
+		segment_size += iov_vu[iov_cnt + 1].iov_len;
+		if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) {
+			segment_size = 0;
+		} else if (segment_size >= mss) {
+			iov_vu[iov_cnt + 1].iov_len -= segment_size - mss;
+			segment_size = 0;
+		}
+		fillsize -= iov_vu[iov_cnt + 1].iov_len;
+
+		iov_cnt++;
+	}
+	if (iov_cnt == 0)
+		return 0;
+
+	mh_sock.msg_iov = iov_vu;
+	mh_sock.msg_iovlen = iov_cnt + 1;
+
+	do
+		ret = recvmsg(s, &mh_sock, MSG_PEEK);
+	while (ret < 0 && errno == EINTR);
+
+	if (ret < 0) {
+		vu_queue_rewind(vq, iov_cnt);
+		if (errno != EAGAIN && errno != EWOULDBLOCK) {
+			ret = -errno;
+			tcp_rst(c, conn);
+		}
+		return ret;
+	}
+	if (!ret) {
+		vu_queue_rewind(vq, iov_cnt);
+
+		if ((conn->events & (SOCK_FIN_RCVD | TAP_FIN_SENT)) == SOCK_FIN_RCVD) {
+			int retf = tcp_vu_send_flag(c, conn, FIN | ACK);
+
+			if (retf) {
+				tcp_rst(c, conn);
+				return retf;
+			}
+
+			conn_event(c, conn, TAP_FIN_SENT);
+		}
+		return 0;
+	}
+
+	*data_len = ret;
+	return iov_cnt;
+}
+
+/**
+ * tcp_vu_prepare() - Prepare the packet header
+ * @c:		Execution context
+ * @conn:	Connection pointer
+ * @first:	Pointer to the array of IO vectors
+ * @data_len:	Packet data length
+ * @check:	Checksum, if already known
+ *
+ * Return: Level-4 length
+ */
+static size_t tcp_vu_prepare(const struct ctx *c,
+			     struct tcp_tap_conn *conn, struct iovec *first,
+			     size_t data_len, const uint16_t **check)
+{
+	const struct flowside *toside = TAPFLOW(conn);
+	struct iovec l2_iov[TCP_NUM_IOVS];
+	char *base = first->iov_base;
+	struct ethhdr *eh;
+	size_t l4len;
+
+	/* We assume the first iovec provided by the guest is large enough
+	 * to embed all the headers needed by the L2 frame
+	 */
+
+	l2_iov[TCP_IOV_TAP].iov_base = NULL;
+	l2_iov[TCP_IOV_TAP].iov_len = 0;
+	l2_iov[TCP_IOV_ETH].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr);
+
+	eh = l2_iov[TCP_IOV_ETH].iov_base;
+
+	memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest));
+	memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));
+
+	/* initialize header */
+	if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) {
+		struct tcp_payload_t *payload;
+		struct iphdr *iph;
+
+		ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) +
+		       sizeof(struct ethhdr) + sizeof(struct iphdr) +
+		       sizeof(struct tcphdr));
+
+		l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+					      l2_iov[TCP_IOV_ETH].iov_len;
+		l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr);
+		l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+						   l2_iov[TCP_IOV_IP].iov_len;
+
+		eh->h_proto = htons(ETH_P_IP);
+
+		iph = l2_iov[TCP_IOV_IP].iov_base;
+		*iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP);
+
+		payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+		payload->th = (struct tcphdr){
+			.doff = offsetof(struct tcp_payload_t, data) / 4,
+			.ack = 1
+		};
+
+		l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, *check,
+						conn->seq_to_tap, true);
+		/* keep the following assignment for clarity */
+		/* cppcheck-suppress unreadVariable */
+		l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+
+		*check = &iph->check;
+	} else {
+		struct tcp_payload_t *payload;
+		struct ipv6hdr *ip6h;
+
+		ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) +
+		       sizeof(struct ethhdr) + sizeof(struct ipv6hdr) +
+		       sizeof(struct tcphdr));
+
+		l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base +
+					      l2_iov[TCP_IOV_ETH].iov_len;
+		l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr);
+		l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base +
+						   l2_iov[TCP_IOV_IP].iov_len;
+
+		eh->h_proto = htons(ETH_P_IPV6);
+
+		ip6h = l2_iov[TCP_IOV_IP].iov_base;
+		*ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP);
+
+		payload = l2_iov[TCP_IOV_PAYLOAD].iov_base;
+		payload->th = (struct tcphdr){
+			.doff = offsetof(struct tcp_payload_t, data) / 4,
+			.ack = 1
+		};
+
+		l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, NULL,
+						conn->seq_to_tap, true);
+		/* keep the following assignment for clarity */
+		/* cppcheck-suppress unreadVariable */
+		l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len;
+	}
+
+	return l4len;
+}
+
+/**
+ * tcp_vu_data_from_sock() - Handle new data from socket, queue to vhost-user,
+ *                           in window
+ * @c:		Execution context
+ * @conn:	Connection pointer
+ *
+ * Return: Negative on connection reset, 0 otherwise
+ */
+int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)
+{
+	uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap;
+	struct vu_dev *vdev = c->vdev;
+	struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE];
+	const struct flowside *tapside = TAPFLOW(conn);
+	uint16_t mss = MSS_GET(conn);
+	size_t l2_hdrlen, fillsize;
+	int i, iov_cnt, iov_used;
+	int v4 = CONN_V4(conn);
+	uint32_t already_sent = 0;
+	const uint16_t *check;
+	struct iovec *first;
+	int segment_size;
+	int num_buffers;
+	ssize_t len;
+
+	if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) {
+		flow_err(conn,
+			 "Got packet, but RX virtqueue not usable yet");
+		return 0;
+	}
+
+	already_sent = conn->seq_to_tap - conn->seq_ack_from_tap;
+
+	if (SEQ_LT(already_sent, 0)) {
+		/* RFC 761, section 2.1. */
+		flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u",
+			   conn->seq_ack_from_tap, conn->seq_to_tap);
+		conn->seq_to_tap = conn->seq_ack_from_tap;
+		already_sent = 0;
+	}
+
+	if (!wnd_scaled || already_sent >= wnd_scaled) {
+		conn_flag(c, conn, STALLED);
+		conn_flag(c, conn, ACK_FROM_TAP_DUE);
+		return 0;
+	}
+
+	/* Set up buffer descriptors we'll fill completely and partially. */
*/ + + fillsize = wnd_scaled; + + if (peek_offset_cap) + already_sent = 0; + + iov_vu[0].iov_base = tcp_buf_discard; + iov_vu[0].iov_len = already_sent; + fillsize -= already_sent; + + /* collect the buffers from vhost-user and fill them with the + * data from the socket + */ + iov_cnt = tcp_vu_sock_recv(c, conn, v4, fillsize, &len); + if (iov_cnt <= 0) + return iov_cnt; + + len -= already_sent; + if (len <= 0) { + conn_flag(c, conn, STALLED); + vu_queue_rewind(vq, iov_cnt); + return 0; + } + + conn_flag(c, conn, ~STALLED); + + /* Likely, some new data was acked too. */ + tcp_update_seqack_wnd(c, conn, 0, NULL); + + /* initialize headers */ + l2_hdrlen = tcp_vu_l2_hdrlen(!v4); + iov_used = 0; + num_buffers = 0; + check = NULL; + segment_size = 0; + + /* iov_vu is an array of buffers, and each buffer can be + * smaller than the segment size we want to use; with + * num_buffers we can merge several virtio iov buffers into one packet: + * we only need to set the packet headers in the first iov, and + * num_buffers to the number of iov entries + */ + for (i = 0; i < iov_cnt && len; i++) { + + if (segment_size == 0) + first = &iov_vu[i + 1]; + + if (iov_vu[i + 1].iov_len > (size_t)len) + iov_vu[i + 1].iov_len = len; + + len -= iov_vu[i + 1].iov_len; + iov_used++; + + segment_size += iov_vu[i + 1].iov_len; + num_buffers++; + + if (segment_size >= mss || len == 0 || + i + 1 == iov_cnt || !vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { + struct virtio_net_hdr_mrg_rxbuf *vh; + size_t l4len; + + if (i + 1 == iov_cnt) + check = NULL; + + /* restore first iovec base: point to vnet header */ + first->iov_base = (char *)first->iov_base - l2_hdrlen; + first->iov_len = first->iov_len + l2_hdrlen; + + vh = first->iov_base; + + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(num_buffers); + + l4len = tcp_vu_prepare(c, conn, first, segment_size, &check); + + tcp_vu_pcap(c, tapside, first, num_buffers, l4len); + + conn->seq_to_tap += segment_size; + + segment_size = 0; + num_buffers = 0; + } + } + + /* release unused buffers */ + vu_queue_rewind(vq, iov_cnt - iov_used); + + /* send packets */ + vu_send_frame(vdev, vq, elem, &iov_vu[1], iov_used); + + conn_flag(c, conn, ACK_FROM_TAP_DUE); + + return 0; +} diff --git a/tcp_vu.h b/tcp_vu.h new file mode 100644 index 000000000000..b433c3e0d06f --- /dev/null +++ b/tcp_vu.h @@ -0,0 +1,12 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef TCP_VU_H +#define TCP_VU_H + +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags); +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn); + +#endif /* TCP_VU_H */ diff --git a/udp.c b/udp.c index 2ba00c9c20a8..f7b5b5eb6421 100644 --- a/udp.c +++ b/udp.c @@ -109,8 +109,7 @@ #include "pcap.h" #include "log.h" #include "flow_table.h" - -#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ +#include "udp_internal.h" /* "Spliced" sockets indexed by bound port (host order) */ static int udp_splice_ns [IP_VERSIONS][NUM_PORTS]; @@ -118,20 +117,8 @@ static int udp_splice_init[IP_VERSIONS][NUM_PORTS]; /* Static buffers */ -/** - * struct udp_payload_t - UDP header and data for inbound messages - * @uh: UDP header - * @data: UDP data - */ -static struct udp_payload_t { - struct udphdr uh; - char data[USHRT_MAX - sizeof(struct udphdr)]; -#ifdef __AVX2__ -} __attribute__ ((packed, aligned(32))) -#else -} __attribute__ ((packed, aligned(__alignof__(unsigned
int)))) -#endif -udp_payload[UDP_MAX_FRAMES]; +/* UDP header and data for inbound messages */ +static struct udp_payload_t udp_payload[UDP_MAX_FRAMES]; /* Ethernet header for IPv4 frames */ static struct ethhdr udp4_eth_hdr; @@ -298,11 +285,13 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum * * Return: size of IPv4 payload (UDP header + data) */ -static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, - const struct flowside *toside, size_t dlen) +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum) { const struct in_addr *src = inany_v4(&toside->oaddr); const struct in_addr *dst = inany_v4(&toside->eaddr); @@ -319,7 +308,10 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, bp->uh.source = htons(toside->oport); bp->uh.dest = htons(toside->eport); bp->uh.len = htons(l4len); - csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); + if (no_udp_csum) + bp->uh.check = 0; + else + csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); return l4len; } @@ -330,11 +322,13 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum * * Return: size of IPv6 payload (UDP header + data) */ -static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, - const struct flowside *toside, size_t dlen) +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum) { uint16_t l4len = dlen + sizeof(bp->uh); @@ -348,7 +342,16 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, bp->uh.source = htons(toside->oport); bp->uh.dest = htons(toside->eport); bp->uh.len = ip6h->payload_len; - csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, bp->data, dlen); + if (no_udp_csum) { + /* 0 is an invalid checksum for UDP IPv6 and dropped by + * the kernel stack, even if the checksum is disabled by virtio + * flags. We need to put any non-zero value here.
+ */ + bp->uh.check = 0xffff; + } else { + csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, + bp->data, dlen); + } return l4len; } @@ -358,9 +361,11 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, * @mmh: Receiving mmsghdr array * @idx: Index of the datagram to prepare * @toside: Flowside for destination side + * @no_udp_csum: Do not set UDP checksum */ -static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, - const struct flowside *toside) +static void udp_tap_prepare(const struct mmsghdr *mmh, + unsigned idx, const struct flowside *toside, + bool no_udp_csum) { struct iovec (*tap_iov)[UDP_NUM_IOVS] = &udp_l2_iov[idx]; struct udp_payload_t *bp = &udp_payload[idx]; @@ -368,13 +373,15 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, size_t l4len; if (!inany_v4(&toside->eaddr) || !inany_v4(&toside->oaddr)) { - l4len = udp_update_hdr6(&bm->ip6h, bp, toside, mmh[idx].msg_len); + l4len = udp_update_hdr6(&bm->ip6h, bp, toside, + mmh[idx].msg_len, no_udp_csum); tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip6h) + sizeof(udp6_eth_hdr)); (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp6_eth_hdr); (*tap_iov)[UDP_IOV_IP] = IOV_OF_LVALUE(bm->ip6h); } else { - l4len = udp_update_hdr4(&bm->ip4h, bp, toside, mmh[idx].msg_len); + l4len = udp_update_hdr4(&bm->ip4h, bp, toside, + mmh[idx].msg_len, no_udp_csum); tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip4h) + sizeof(udp4_eth_hdr)); (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp4_eth_hdr); @@ -447,7 +454,7 @@ static int udp_sock_recverr(int s) * * Return: Number of errors handled, or < 0 if we have an unrecoverable error */ -static int udp_sock_errs(const struct ctx *c, int s, uint32_t events) +int udp_sock_errs(const struct ctx *c, int s, uint32_t events) { unsigned n_err = 0; socklen_t errlen; @@ -524,7 +531,7 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, } /** - * udp_listen_sock_handler() - Handle new data from socket + * udp_buf_listen_sock_handler() - Handle new data from socket * @c: Execution context * @ref: epoll reference * @events: epoll events bitmap * @now: Current timestamp * * #syscalls recvmmsg */ -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now) +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) { const socklen_t sasize = sizeof(udp_meta[0].s_in); int n, i; @@ -565,7 +572,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, udp_splice_prepare(udp_mh_recv, i); } else if (batchpif == PIF_TAP) { udp_tap_prepare(udp_mh_recv, i, - flowside_at_sidx(batchsidx)); + flowside_at_sidx(batchsidx), + false); } if (++i >= n) @@ -599,7 +607,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, } /** - * udp_reply_sock_handler() - Handle new data from flow specific socket + * udp_buf_reply_sock_handler() - Handle new data from flow specific socket * @c: Execution context * @ref: epoll reference * @events: epoll events bitmap * @now: Current timestamp * * #syscalls recvmmsg */ -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now) +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) { flow_sidx_t tosidx =
flow_sidx_opposite(ref.flowside); const struct flowside *toside = flowside_at_sidx(tosidx); @@ -636,7 +644,7 @@ void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, if (pif_is_socket(topif)) udp_splice_prepare(udp_mh_recv, i); else if (topif == PIF_TAP) - udp_tap_prepare(udp_mh_recv, i, toside); + udp_tap_prepare(udp_mh_recv, i, toside, false); /* Restore sockaddr length clobbered by recvmsg() */ udp_mh_recv[i].msg_hdr.msg_namelen = sizeof(udp_meta[i].s_in); } diff --git a/udp.h b/udp.h index a8e76bfe8f37..ea23fb36b637 100644 --- a/udp.h +++ b/udp.h @@ -9,10 +9,10 @@ #define UDP_TIMER_INTERVAL 1000 /* ms */ void udp_portmap_clear(void); -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now); -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now); +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); int udp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af, const void *saddr, const void *daddr, const struct pool *p, int idx, const struct timespec *now); diff --git a/udp_internal.h b/udp_internal.h new file mode 100644 index 000000000000..cc80e3055423 --- /dev/null +++ b/udp_internal.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later + * Copyright (c) 2021 Red Hat GmbH + * Author: Stefano Brivio <sbrivio(a)redhat.com> + */ + +#ifndef UDP_INTERNAL_H +#define UDP_INTERNAL_H + +#include "tap.h" /* needed by udp_meta_t */ + +#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ + +/** + * struct udp_payload_t - UDP header and data for inbound messages + * @uh: UDP header + * @data: UDP data + */ +struct udp_payload_t { + struct udphdr uh; + char data[USHRT_MAX - sizeof(struct udphdr)]; +#ifdef __AVX2__ +} __attribute__ ((packed, aligned(32))); +#else +} __attribute__ ((packed, aligned(__alignof__(unsigned int)))); +#endif + +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum); +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum); +int udp_sock_errs(const struct ctx *c, int s, uint32_t events); +#endif /* UDP_INTERNAL_H */ diff --git a/udp_vu.c b/udp_vu.c new file mode 100644 index 000000000000..fa390dec994a --- /dev/null +++ b/udp_vu.c @@ -0,0 +1,397 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* udp_vu.c - UDP L2 vhost-user management functions + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#include <unistd.h> +#include <assert.h> +#include <net/ethernet.h> +#include <net/if.h> +#include <netinet/in.h> +#include <netinet/ip.h> +#include <netinet/udp.h> +#include <stdint.h> +#include <stddef.h> +#include <sys/uio.h> +#include <linux/virtio_net.h> + +#include "checksum.h" +#include "util.h" +#include "ip.h" +#include "siphash.h" +#include "inany.h" +#include "passt.h" +#include "pcap.h" +#include "log.h" +#include "vhost_user.h" +#include "udp_internal.h" +#include "flow.h" +#include "flow_table.h" +#include "udp_flow.h" +#include "udp_vu.h" +#include "vu_common.h" + +static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE]; +static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE]; +static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; 
+static int in_sg_count; + +/** + * udp_vu_l2_hdrlen() - Return the size of the headers in a level-2 frame (UDP) + * @v6: Set for IPv6 packet + * + * Return: Size of the headers + */ +static size_t udp_vu_l2_hdrlen(bool v6) +{ + size_t l2_hdrlen; + + l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) + + sizeof(struct udphdr); + + if (v6) + l2_hdrlen += sizeof(struct ipv6hdr); + else + l2_hdrlen += sizeof(struct iphdr); + + return l2_hdrlen; +} + +static int udp_vu_sock_init(int s, union sockaddr_inany *s_in) +{ + struct msghdr msg = { + .msg_name = s_in, + .msg_namelen = sizeof(union sockaddr_inany), + }; + + return recvmsg(s, &msg, MSG_PEEK | MSG_DONTWAIT); +} + +/** + * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers + * @c: Execution context + * @s: Socket to receive from + * @events: epoll events bitmap + * @v6: Set for IPv6 connections + * @data_len: Size of received data (output) + * + * Return: Number of iov entries used to store the datagram + */ +static int udp_vu_sock_recv(const struct ctx *c, int s, uint32_t events, + bool v6, ssize_t *data_len) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + int virtqueue_max, iov_cnt, idx, iov_used; + size_t fillsize, size, off, l2_hdrlen; + struct virtio_net_hdr_mrg_rxbuf *vh; + struct msghdr msg = { 0 }; + char *base; + + ASSERT(!c->no_udp); + + if (!(events & EPOLLIN)) + return 0; + + /* compute L2 header length */ + + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + virtqueue_max = VIRTQUEUE_MAX_SIZE; + else + virtqueue_max = 1; + + l2_hdrlen = udp_vu_l2_hdrlen(v6); + + fillsize = USHRT_MAX; + iov_cnt = 0; + in_sg_count = 0; + while (fillsize && iov_cnt < virtqueue_max && + in_sg_count < ARRAY_SIZE(in_sg)) { + int ret; + + elem[iov_cnt].out_num = 0; + elem[iov_cnt].out_sg = NULL; + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); + if (ret < 0) + break; + in_sg_count += elem[iov_cnt].in_num; + + if (elem[iov_cnt].in_num < 1) { + err("virtio-net receive queue contains no in buffers"); + vu_queue_rewind(vq, iov_cnt); + return 0; + } + ASSERT(elem[iov_cnt].in_num == 1); + ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen); + + if (iov_cnt == 0) { + base = elem[iov_cnt].in_sg[0].iov_base; + size = elem[iov_cnt].in_sg[0].iov_len; + + /* keep space for the headers */ + iov_vu[0].iov_base = base + l2_hdrlen; + iov_vu[0].iov_len = size - l2_hdrlen; + } else { + iov_vu[iov_cnt].iov_base = elem[iov_cnt].in_sg[0].iov_base; + iov_vu[iov_cnt].iov_len = elem[iov_cnt].in_sg[0].iov_len; + } + + if (iov_vu[iov_cnt].iov_len > fillsize) + iov_vu[iov_cnt].iov_len = fillsize; + + fillsize -= iov_vu[iov_cnt].iov_len; + + iov_cnt++; + } + if (iov_cnt == 0) + return 0; + + msg.msg_iov = iov_vu; + msg.msg_iovlen = iov_cnt; + + *data_len = recvmsg(s, &msg, 0); + if (*data_len < 0) { + vu_queue_rewind(vq, iov_cnt); + return 0; + } + + /* restore original values */ + iov_vu[0].iov_base = base; + iov_vu[0].iov_len = size; + + /* count the number of buffers filled by recvmsg() */ + idx = iov_skip_bytes(iov_vu, iov_cnt, l2_hdrlen + *data_len, + &off); + /* adjust last iov length */ + if (idx < iov_cnt) + iov_vu[idx].iov_len = off; + iov_used = idx + !!off; + + /* release unused buffers */ + vu_queue_rewind(vq, iov_cnt - iov_used); + + vh = (struct virtio_net_hdr_mrg_rxbuf *)base; + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) +
vh->num_buffers = htole16(iov_used); + + return iov_used; +} + +/** + * udp_vu_prepare() - Prepare the packet header + * @c: Execution context + * @toside: Address information for one side of the flow + * @data_len: Packet data length + * + * Return: Level-4 length + */ +static size_t udp_vu_prepare(const struct ctx *c, + const struct flowside *toside, ssize_t data_len) +{ + struct ethhdr *eh; + size_t l4len; + + /* ethernet header */ + eh = vu_eth(iov_vu[0].iov_base); + + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); + + /* initialize header */ + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { + struct iphdr *iph = vu_ip(iov_vu[0].iov_base); + struct udp_payload_t *bp = vu_payloadv4(iov_vu[0].iov_base); + + eh->h_proto = htons(ETH_P_IP); + + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP); + + l4len = udp_update_hdr4(iph, bp, toside, data_len, true); + } else { + struct ipv6hdr *ip6h = vu_ip(iov_vu[0].iov_base); + struct udp_payload_t *bp = vu_payloadv6(iov_vu[0].iov_base); + + eh->h_proto = htons(ETH_P_IPV6); + + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP); + + l4len = udp_update_hdr6(ip6h, bp, toside, data_len, true); + } + + return l4len; +} + +/** + * udp_vu_pcap() - Capture a single frame to pcap file (UDP) + * @c: Execution context + * @toside: Address information for one side of the flow + * @l4len: Level-4 payload length + * @iov_used: Length of the array + */ +static void udp_vu_pcap(const struct ctx *c, const struct flowside *toside, + size_t l4len, int iov_used) +{ + const struct in_addr *src4 = inany_v4(&toside->oaddr); + const struct in_addr *dst4 = inany_v4(&toside->eaddr); + char *base = iov_vu[0].iov_base; + size_t size = iov_vu[0].iov_len; + struct udp_payload_t *bp; + uint32_t sum; + + if (!*c->pcap) + return; + + if (src4 && dst4) { + bp = vu_payloadv4(base); + sum = proto_ipv4_header_psum(l4len, IPPROTO_UDP, *src4, *dst4); + } else { + bp = vu_payloadv6(base); + sum = proto_ipv6_header_psum(l4len, IPPROTO_UDP, + &toside->oaddr.a6, + &toside->eaddr.a6); + bp->uh.check = 0; /* by default, set to 0xffff */ + } + + iov_vu[0].iov_base = &bp->uh; + iov_vu[0].iov_len = size - ((char *)iov_vu[0].iov_base - base); + + bp->uh.check = csum_iov(iov_vu, iov_used, sum); + + /* set iov for pcap logging */ + iov_vu[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + iov_vu[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf); + pcap_iov(iov_vu, iov_used); + + /* restore iov_vu[0] */ + iov_vu[0].iov_base = base; + iov_vu[0].iov_len = size; +} + +/** + * udp_vu_listen_sock_handler() - Handle new data from socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *toside; + union sockaddr_inany s_in; + flow_sidx_t batchsidx; + uint8_t batchpif; + bool v6; + int i; + + if (udp_sock_errs(c, ref.fd, events) < 0) { + err("UDP: Unrecoverable error on listening socket:" + " (%s port %hu)", pif_name(ref.udp.pif), ref.udp.port); + return; + } + + if (udp_vu_sock_init(ref.fd, &s_in) < 0) + return; + + batchsidx = udp_flow_from_sock(c, ref, &s_in, now); + batchpif = pif_at_sidx(batchsidx); + + if (batchpif != PIF_TAP) { + if (flow_sidx_valid(batchsidx)) { + flow_sidx_t fromsidx =
flow_sidx_opposite(batchsidx); + struct udp_flow *uflow = udp_at_sidx(batchsidx); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(pif_at_sidx(fromsidx)), + pif_name(batchpif)); + } else { + debug("Discarding 1 datagram without flow"); + } + + return; + } + + toside = flowside_at_sidx(batchsidx); + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, ref.fd, events, v6, &data_len); + if (iov_used <= 0) + return; + + l4len = udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} + +/** + * udp_vu_reply_sock_handler() - Handle new data from flow specific socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); + const struct flowside *toside = flowside_at_sidx(tosidx); + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + struct udp_flow *uflow = udp_at_sidx(ref.flowside); + int from_s = uflow->s[ref.flowside.sidei]; + uint8_t topif = pif_at_sidx(tosidx); + bool v6; + int i; + + ASSERT(!c->no_udp); + ASSERT(uflow); + + if (udp_sock_errs(c, from_s, events) < 0) { + flow_err(uflow, "Unrecoverable error on reply socket"); + flow_err_details(uflow); + udp_flow_close(c, uflow); + return; + } + + if (topif != PIF_TAP) { + uint8_t frompif = pif_at_sidx(ref.flowside); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(frompif), pif_name(topif)); + return; + } + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, from_s, events, v6, &data_len); + if (iov_used <= 0) + return; + flow_trace(uflow, "Received 1 datagram on reply socket"); + uflow->ts = now->tv_sec; + + l4len = udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} diff --git a/udp_vu.h b/udp_vu.h new file mode 100644 index 000000000000..ba7018d3bf01 --- /dev/null +++ b/udp_vu.h @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef UDP_VU_H +#define UDP_VU_H + +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +#endif /* UDP_VU_H */ diff --git a/vhost_user.c b/vhost_user.c index 3b38e06f268e..0f98ee7fa7c3 100644 --- a/vhost_user.c +++ b/vhost_user.c @@ -52,7 +52,6 @@ * this is part of the vhost-user backend * convention. 
*/ -/* cppcheck-suppress unusedFunction */ void vu_print_capabilities(void) { info("{"); @@ -162,9 +161,7 @@ static void vmsg_close_fds(const struct vhost_user_msg *vmsg) */ static void vu_remove_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL); } /** @@ -425,7 +422,6 @@ static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq) * * Return: 0 if the zone is in a mapped memory region, -1 otherwise */ -/* cppcheck-suppress unusedFunction */ int vu_packet_check_range(void *buf, size_t offset, size_t len, const char *start) { @@ -515,6 +511,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev, } } + /* As vu_packet_check_range() has no access to the number of + * memory regions, mark the end of the array with mmap_addr = 0 + */ + ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1); + vdev->regions[vdev->nregions].mmap_addr = 0; + + tap_sock_update_buf(vdev->regions, 0); + return false; } @@ -643,9 +647,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vdev, */ static void vu_set_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK, .fd = fd }; + struct epoll_event ev = { 0 }; + + ev.data.u64 = ref.u64; + ev.events = EPOLLIN; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev); } /** @@ -685,7 +692,6 @@ static int vu_wait_queue(const struct vu_virtq *vq) * * Return: number of bytes sent, -1 if there is an error */ -/* cppcheck-suppress unusedFunction */ int vu_send(struct vu_dev *vdev, const void *buf, size_t size) { struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; @@ -869,7 +875,6 @@ static void vu_handle_tx(struct vu_dev *vdev, int index, * @ref: epoll reference information * @now: Current timestamp */ -/* cppcheck-suppress unusedFunction */ void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref, const struct timespec *now) { @@ -1104,11 +1109,11 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev, * @c: execution context * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_init(struct ctx *c, struct vu_dev *vdev) { int i; + c->vdev = vdev; vdev->context = c; for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) { vdev->vq[i] = (struct vu_virtq){ @@ -1124,7 +1129,6 @@ void vu_init(struct ctx *c, struct vu_dev *vdev) * vu_cleanup() - Reset vhost-user device * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_cleanup(struct vu_dev *vdev) { unsigned int i; @@ -1171,8 +1175,7 @@ void vu_cleanup(struct vu_dev *vdev) */ static void vu_sock_reset(struct vu_dev *vdev) { - /* Placeholder to add passt related code */ - (void)vdev; + tap_sock_reset(vdev->context); } static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev, @@ -1200,7 +1203,6 @@ static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev, * @fd: vhost-user message socket * @events: epoll events */ -/* cppcheck-suppress unusedFunction */ void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events) { struct vhost_user_msg msg = { 0 }; diff --git a/virtio.c b/virtio.c index 237395396606..31e56def2c23 100644 --- a/virtio.c +++ b/virtio.c @@ -562,7 +562,6 @@ void vu_queue_unpop(struct vu_virtq *vq) * @vq: Virtqueue * @num: Number of elements to unpop */ -/* cppcheck-suppress unusedFunction */ bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num) { if (num > vq->inuse) diff --git a/vu_common.c
b/vu_common.c new file mode 100644 index 000000000000..7a9caae17f42 --- /dev/null +++ b/vu_common.c @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + * + * vu_common.c - vhost-user common UDP and TCP functions + */ + +#include <unistd.h> +#include <sys/uio.h> +#include <linux/virtio_net.h> + +#include "util.h" +#include "passt.h" +#include "vhost_user.h" +#include "vu_common.h" + +/** + * vu_send_frame() - Send one frame to the vhost-user interface + * @vdev: vhost-user device + * @vq: vhost-user virtqueue + * @elem: virtqueue element array to send back to the virtqueue + * @iov_vu: iovec array containing the data to send + * @iov_used: Length of the array + */ +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, + struct vu_virtq_element *elem, const struct iovec *iov_vu, + int iov_used) +{ + int i; + + for (i = 0; i < iov_used; i++) + vu_queue_fill(vq, &elem[i], iov_vu[i].iov_len, i); + + vu_queue_flush(vq, iov_used); + vu_queue_notify(vdev, vq); +} diff --git a/vu_common.h b/vu_common.h new file mode 100644 index 000000000000..20950b44493c --- /dev/null +++ b/vu_common.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + * + * vhost-user common UDP and TCP functions + */ + +#ifndef VU_COMMON_H +#define VU_COMMON_H + +static inline void *vu_eth(void *base) +{ + return ((char *)base + sizeof(struct virtio_net_hdr_mrg_rxbuf)); +} + +static inline void *vu_ip(void *base) +{ + return (struct ethhdr *)vu_eth(base) + 1; +} + +static inline void *vu_payloadv4(void *base) +{ + return (struct iphdr *)vu_ip(base) + 1; +} + +static inline void *vu_payloadv6(void *base) +{ + return (struct ipv6hdr *)vu_ip(base) + 1; +} + +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, + struct vu_virtq_element *elem, const struct iovec *iov_vu, + int iov_used); +#endif /* VU_COMMON_H */ -- 2.46.0
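For reference, the vu_eth()/vu_ip()/vu_payloadv[46]() helpers above simply walk the fixed header layout at the start of a guest buffer. A minimal sketch of how they resolve in the IPv4 case, mirroring their use in udp_vu_prepare() and assuming, as the series does, that the first in_sg buffer is large enough to hold all the headers:

	char *base = elem[0].in_sg[0].iov_base;	/* buffer popped from the RX virtqueue */

	struct ethhdr *eh = vu_eth(base);		/* base + sizeof(struct virtio_net_hdr_mrg_rxbuf) */
	struct iphdr *iph = vu_ip(base);		/* just past the Ethernet header */
	struct udp_payload_t *bp = vu_payloadv4(base);	/* just past the IPv4 header */

	/* so a frame in a guest buffer is laid out as:
	 * vnet header | Ethernet header | IP header | L4 header + data
	 */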
On Fri, Sep 13, 2024 at 06:20:34PM +0200, Laurent Vivier wrote:add virtio and vhost-user functions to connect with QEMU. $ ./passt --vhost-user and # qemu-system-x86_64 ... -m 4G \ -object memory-backend-memfd,id=memfd0,share=on,size=4G \ -numa node,memdev=memfd0 \ -chardev socket,id=chr0,path=/tmp/passt_1.socket \ -netdev vhost-user,id=netdev0,chardev=chr0 \ -device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \ ... Signed-off-by: Laurent Vivier <lvivier(a)redhat.com> --- Makefile | 6 +- checksum.c | 1 - conf.c | 23 +- epoll_type.h | 4 + isolation.c | 17 +- packet.c | 11 + packet.h | 8 +- passt.1 | 10 +- passt.c | 26 +- passt.h | 6 + pcap.c | 1 - tap.c | 111 +++++++-- tap.h | 5 +- tcp.c | 31 ++- tcp_buf.c | 8 +- tcp_internal.h | 3 +- tcp_vu.c | 647 +++++++++++++++++++++++++++++++++++++++++++++++++ tcp_vu.h | 12 + udp.c | 78 +++--- udp.h | 8 +- udp_internal.h | 34 +++ udp_vu.c | 397 ++++++++++++++++++++++++++++++ udp_vu.h | 13 + vhost_user.c | 32 +-- virtio.c | 1 - vu_common.c | 36 +++ vu_common.h | 34 +++ 27 files changed, 1457 insertions(+), 106 deletions(-) create mode 100644 tcp_vu.c create mode 100644 tcp_vu.h create mode 100644 udp_internal.h create mode 100644 udp_vu.c create mode 100644 udp_vu.h create mode 100644 vu_common.c create mode 100644 vu_common.h diff --git a/Makefile b/Makefile index 0e8ed60a0da1..1e8910dda1f4 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,8 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS) PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \ icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c + tcp_buf.c tcp_splice.c tcp_vu.c udp.c udp_flow.c udp_vu.c util.c \ + vhost_user.c virtio.c vu_common.c QRAP_SRCS = qrap.c SRCS = $(PASST_SRCS) $(QRAP_SRCS) @@ -64,7 +65,8 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \ flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ - udp.h udp_flow.h util.h vhost_user.h virtio.h + tcp_vu.h udp.h udp_flow.h udp_internal.h udp_vu.h util.h vhost_user.h \ + virtio.h vu_common.h HEADERS = $(PASST_HEADERS) seccomp.h C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 }; diff --git a/checksum.c b/checksum.c index 006614fcbb28..aa5b7ae1cb66 100644 --- a/checksum.c +++ b/checksum.c @@ -501,7 +501,6 @@ uint16_t csum(const void *buf, size_t len, uint32_t init) * * Return: 16-bit folded, complemented checksum */ -/* cppcheck-suppress unusedFunction */ uint16_t csum_iov(const struct iovec *iov, size_t n, uint32_t init) { unsigned int i; diff --git a/conf.c b/conf.c index b27588649af3..eb8e1685713a 100644 --- a/conf.c +++ b/conf.c @@ -45,6 +45,7 @@ #include "lineread.h" #include "isolation.h" #include "log.h" +#include "vhost_user.h" /** * next_chunk - Return the next piece of a string delimited by a character @@ -769,9 +770,14 @@ static void usage(const char *name, FILE *f, int status) " default: same interface name as external one\n"); } else { fprintf(f, - " -s, --socket PATH UNIX domain socket path\n" + " -s, --socket, --socket-path PATH UNIX domain socket path\n" " default: probe free path starting from " UNIX_SOCK_PATH "\n", 1); + fprintf(f, + " --vhost-user Enable vhost-user mode\n" + " UNIX domain socket is provided by -s option\n" + " 
--print-capabilities print back-end capabilities in JSON format,\n" + " only meaningful for vhost-user mode\n"); } fprintf(f, @@ -1291,6 +1297,10 @@ void conf(struct ctx *c, int argc, char **argv) {"netns-only", no_argument, NULL, 20 }, {"map-host-loopback", required_argument, NULL, 21 }, {"map-guest-addr", required_argument, NULL, 22 }, + {"vhost-user", no_argument, NULL, 23 }, + /* vhost-user backend program convention */ + {"print-capabilities", no_argument, NULL, 24 }, + {"socket-path", required_argument, NULL, 's' }, { 0 }, }; const char *logname = (c->mode == MODE_PASTA) ? "pasta" : "passt"; @@ -1429,7 +1439,6 @@ void conf(struct ctx *c, int argc, char **argv) sizeof(c->ip6.ifname_out), "%s", optarg); if (ret <= 0 || ret >= (int)sizeof(c->ip6.ifname_out)) die("Invalid interface name: %s", optarg); -Unrelated change.break; case 17: if (c->mode != MODE_PASTA) @@ -1468,6 +1477,16 @@ void conf(struct ctx *c, int argc, char **argv) conf_nat(optarg, &c->ip4.map_guest_addr, &c->ip6.map_guest_addr, NULL); break; + case 23: + if (c->mode == MODE_PASTA) { + err("--vhost-user is for passt mode only"); + usage(argv[0], stdout, EXIT_SUCCESS); + } + c->mode = MODE_VU; + break; + case 24: + vu_print_capabilities(); + break; case 'd': c->debug = 1; c->quiet = 0; diff --git a/epoll_type.h b/epoll_type.h index 0ad1efa0ccec..f3ef41584757 100644 --- a/epoll_type.h +++ b/epoll_type.h @@ -36,6 +36,10 @@ enum epoll_type { EPOLL_TYPE_TAP_PASST, /* socket listening for qemu socket connections */ EPOLL_TYPE_TAP_LISTEN, + /* vhost-user command socket */ + EPOLL_TYPE_VHOST_CMD, + /* vhost-user kick event socket */ + EPOLL_TYPE_VHOST_KICK, EPOLL_NUM_TYPES, }; diff --git a/isolation.c b/isolation.c index 45fba1e68b9d..3d5fd60fde46 100644 --- a/isolation.c +++ b/isolation.c @@ -377,14 +377,21 @@ void isolate_postfork(const struct ctx *c) { struct sock_fprog prog; - prctl(PR_SET_DUMPABLE, 0); + //prctl(PR_SET_DUMPABLE, 0);Useful during testing, but probably doesn't belong in your final patch.- if (c->mode == MODE_PASTA) { - prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); - prog.filter = filter_pasta; - } else { + switch (c->mode) { + case MODE_PASST: prog.len = (unsigned short)ARRAY_SIZE(filter_passt); prog.filter = filter_passt; + break; + case MODE_PASTA: + prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); + prog.filter = filter_pasta; + break; + case MODE_VU: + prog.len = (unsigned short)ARRAY_SIZE(filter_vu); + prog.filter = filter_vu; + break; } if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) || diff --git a/packet.c b/packet.c index 37489961a37e..e5a78d079231 100644 --- a/packet.c +++ b/packet.c @@ -36,6 +36,17 @@ static int packet_check_range(const struct pool *p, size_t offset, size_t len, const char *start, const char *func, int line) { + if (p->buf_size == 0) { + int ret; + + ret = vu_packet_check_range((void *)p->buf, offset, len, start); + + if (ret == -1) + trace("cannot find region, %s:%i", func, line); + + return ret; + } + if (start < p->buf) { trace("packet start %p before buffer start %p, " "%s:%i", (void *)start, (void *)p->buf, func, line); diff --git a/packet.h b/packet.h index 8377dcf678bb..3f70e949c066 100644 --- a/packet.h +++ b/packet.h @@ -8,8 +8,10 @@ /** * struct pool - Generic pool of packets stored in a buffer - * @buf: Buffer storing packet descriptors - * @buf_size: Total size of buffer + * @buf: Buffer storing packet descriptors, + * a struct vu_dev_region array for passt vhost-user mode + * @buf_size: Total size of buffer, + * 0 for passt vhost-user mode * @size: Number of 
usable descriptors for the pool * @count: Number of used descriptors for the pool * @pkt: Descriptors: see macros below @@ -22,6 +24,8 @@ struct pool { struct iovec pkt[1]; }; +int vu_packet_check_range(void *buf, size_t offset, size_t len, + const char *start); void packet_add_do(struct pool *p, size_t len, const char *start, const char *func, int line); void *packet_get_do(const struct pool *p, const size_t idx, diff --git a/passt.1 b/passt.1 index 79d134dbe098..822714147be8 100644 --- a/passt.1 +++ b/passt.1 @@ -378,12 +378,20 @@ interface address are configured on a given host interface. .SS \fBpasst\fR-only options .TP -.BR \-s ", " \-\-socket " " \fIpath +.BR \-s ", " \-\-socket-path ", " \-\-socket " " \fIpath Path for UNIX domain socket used by \fBqemu\fR(1) or \fBqrap\fR(1) to connect to \fBpasst\fR. Default is to probe a free socket, not accepting connections, starting from \fI/tmp/passt_1.socket\fR to \fI/tmp/passt_64.socket\fR. +.TP +.BR \-\-vhost-user +Enable vhost-user. The vhost-user command socket is provided by \fB--socket\fR. + +.TP +.BR \-\-print-capabilities +Print back-end capabilities in JSON format, only meaningful for vhost-user mode. + .TP .BR \-F ", " \-\-fd " " \fIFD Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket is opened diff --git a/passt.c b/passt.c index ad6f0bc32df6..b64efeaf346c 100644 --- a/passt.c +++ b/passt.c @@ -74,6 +74,8 @@ char *epoll_type_str[] = { [EPOLL_TYPE_TAP_PASTA] = "/dev/net/tun device", [EPOLL_TYPE_TAP_PASST] = "connected qemu socket", [EPOLL_TYPE_TAP_LISTEN] = "listening qemu socket", + [EPOLL_TYPE_VHOST_CMD] = "vhost-user command socket", + [EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket", }; static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES, "epoll_type_str[] doesn't match enum epoll_type"); @@ -206,6 +208,7 @@ int main(int argc, char **argv) struct rlimit limit; struct timespec now; struct sigaction sa; + struct vu_dev vdev; clock_gettime(CLOCK_MONOTONIC, &log_start); @@ -262,6 +265,8 @@ int main(int argc, char **argv) pasta_netns_quit_init(&c); tap_sock_init(&c); + if (c.mode == MODE_VU) + vu_init(&c, &vdev);vhost-user is the "tap" interface in vhost-user mode, so I think the vu_init() could be another branch within tap_sock_init(), rather than invoked from the top level here. Feel free to update the name 'tap_sock_init' to something less inaccurate while you're at it..secret_init(&c); @@ -352,14 +357,31 @@ loop: tcp_timer_handler(&c, ref); break; case EPOLL_TYPE_UDP_LISTEN: - udp_listen_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) { + udp_vu_listen_sock_handler(&c, ref, eventmask, + &now); + } else { + udp_buf_listen_sock_handler(&c, ref, eventmask, + &now); + } break; case EPOLL_TYPE_UDP_REPLY: - udp_reply_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) + udp_vu_reply_sock_handler(&c, ref, eventmask, + &now); + else + udp_buf_reply_sock_handler(&c, ref, eventmask, + &now); break; case EPOLL_TYPE_PING: icmp_sock_handler(&c, ref); break; + case EPOLL_TYPE_VHOST_CMD: + vu_control_handler(&vdev, c.fd_tap, eventmask); + break; + case EPOLL_TYPE_VHOST_KICK: + vu_kick_cb(&vdev, ref, &now); + break; default: /* Can't happen */ ASSERT(0); diff --git a/passt.h b/passt.h index 031c9b669cc4..a98f043c7e64 100644 --- a/passt.h +++ b/passt.h @@ -25,6 +25,8 @@ union epoll_ref; #include "fwd.h" #include "tcp.h" #include "udp.h" +#include "udp_vu.h" +#include "vhost_user.h" /* Default address for our end on the tap interface. 
Bit 0 of byte 0 must be 0 * (unicast) and bit 1 of byte 1 must be 1 (locally administered). Otherwise @@ -94,6 +96,7 @@ struct fqdn { enum passt_modes { MODE_PASST, MODE_PASTA, + MODE_VU, }; /** @@ -227,6 +230,7 @@ struct ip6_ctx { * @no_ra: Disable router advertisements * @low_wmem: Low probed net.core.wmem_max * @low_rmem: Low probed net.core.rmem_max + * @vdev: vhost-user device */ struct ctx { enum passt_modes mode; @@ -287,6 +291,8 @@ struct ctx { int low_wmem; int low_rmem; + + struct vu_dev *vdev;At some point I'd like to split off all the tap backend related fields and put them in a struct tap_ctx or similar. Or, I guess a union for the different tap-types.}; void proto_update_l2_buf(const unsigned char *eth_d, diff --git a/pcap.c b/pcap.c index 46cc4b0d72b6..7e9c56090041 100644 --- a/pcap.c +++ b/pcap.c @@ -140,7 +140,6 @@ void pcap_multiple(const struct iovec *iov, size_t frame_parts, unsigned int n, * containing packet data to write, including L2 header * @iovcnt: Number of buffers (@iov entries) */ -/* cppcheck-suppress unusedFunction */ void pcap_iov(const struct iovec *iov, size_t iovcnt) { struct timespec now; diff --git a/tap.c b/tap.c index 41af6a6d0c85..3e1b3c13c321 100644 --- a/tap.c +++ b/tap.c @@ -58,6 +58,7 @@ #include "packet.h" #include "tap.h" #include "log.h" +#include "vhost_user.h" /* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */ static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf); @@ -78,16 +79,22 @@ void tap_send_single(const struct ctx *c, const void *data, size_t l2len) struct iovec iov[2]; size_t iovcnt = 0; - if (c->mode == MODE_PASST) { + switch (c->mode) { + case MODE_PASST: iov[iovcnt] = IOV_OF_LVALUE(vnet_len); iovcnt++; - } - - iov[iovcnt].iov_base = (void *)data; - iov[iovcnt].iov_len = l2len; - iovcnt++; + /* fall through */ + case MODE_PASTA: + iov[iovcnt].iov_base = (void *)data; + iov[iovcnt].iov_len = l2len; + iovcnt++; - tap_send_frames(c, iov, iovcnt, 1); + tap_send_frames(c, iov, iovcnt, 1); + break; + case MODE_VU: + vu_send(c->vdev, data, l2len);I'm a bit uneasy re-introducing a parallel send function for the slow path, rather than using a common tap_send_frames() interface. Any chance you can unify those sensibly? Bearing in mind that this _is_ the slow path, so if you have to copy a bunch of stuff, that's ok.+ break; + } } /** @@ -406,10 +413,18 @@ size_t tap_send_frames(const struct ctx *c, const struct iovec *iov, if (!nframes) return 0; - if (c->mode == MODE_PASTA) + switch (c->mode) { + case MODE_PASTA: m = tap_send_frames_pasta(c, iov, bufs_per_frame, nframes); - else + break; + case MODE_PASST: m = tap_send_frames_passt(c, iov, bufs_per_frame, nframes); + break; + case MODE_VU: + /* fall through */ + default: + ASSERT(0); + } if (m < nframes) debug("tap: failed to send %zu frames of %zu", @@ -968,7 +983,7 @@ void tap_add_packet(struct ctx *c, ssize_t l2len, char *p) * tap_sock_reset() - Handle closing or failure of connect AF_UNIX socket * @c: Execution context */ -static void tap_sock_reset(struct ctx *c) +void tap_sock_reset(struct ctx *c) { info("Client connection closed%s", c->one_off ? 
", exiting" : ""); @@ -979,6 +994,8 @@ static void tap_sock_reset(struct ctx *c) epoll_ctl(c->epollfd, EPOLL_CTL_DEL, c->fd_tap, NULL); close(c->fd_tap); c->fd_tap = -1; + if (c->mode == MODE_VU) + vu_cleanup(c->vdev); } /** @@ -1196,11 +1213,17 @@ static void tap_sock_unix_init(struct ctx *c) ev.data.u64 = ref.u64; epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap_listen, &ev); - info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):"); - info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s", - c->sock_path); - info("or qrap, for earlier qemu versions:"); - info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio"); + if (c->mode == MODE_VU) { + info("You can start qemu with:"); + info(" kvm ... -chardev socket,id=chr0,path=%s -netdev vhost-user,id=netdev0,chardev=chr0 -device virtio-net,netdev=netdev0 -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE -numa node,memdev=memfd0\n", + c->sock_path); + } else { + info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):"); + info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s", + c->sock_path); + info("or qrap, for earlier qemu versions:"); + info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio"); + } } /** @@ -1210,8 +1233,8 @@ static void tap_sock_unix_init(struct ctx *c) */ void tap_listen_handler(struct ctx *c, uint32_t events) { - union epoll_ref ref = { .type = EPOLL_TYPE_TAP_PASST }; struct epoll_event ev = { 0 }; + union epoll_ref ref; int v = INT_MAX / 2; struct ucred ucred; socklen_t len; @@ -1251,6 +1274,10 @@ void tap_listen_handler(struct ctx *c, uint32_t events) trace("tap: failed to set SO_SNDBUF to %i", v); ref.fd = c->fd_tap; + if (c->mode == MODE_VU) + ref.type = EPOLL_TYPE_VHOST_CMD; + else + ref.type = EPOLL_TYPE_TAP_PASST; ev.events = EPOLLIN | EPOLLRDHUP; ev.data.u64 = ref.u64; epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev); @@ -1312,21 +1339,52 @@ static void tap_sock_tun_init(struct ctx *c) epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev); } +/** + * tap_sock_update_buf() - Set the buffer base and size for the pool of packets + * @base: Buffer base + * @size: Buffer size + */ +void tap_sock_update_buf(void *base, size_t size) +{ + int i; + + pool_tap4_storage.buf = base; + pool_tap4_storage.buf_size = size; + pool_tap6_storage.buf = base; + pool_tap6_storage.buf_size = size; + + for (i = 0; i < TAP_SEQS; i++) { + tap4_l4[i].p.buf = base; + tap4_l4[i].p.buf_size = size; + tap6_l4[i].p.buf = base; + tap6_l4[i].p.buf_size = size; + } +} + /** * tap_sock_init() - Create and set up AF_UNIX socket or tuntap file descriptor * @c: Execution context */ void tap_sock_init(struct ctx *c) { - size_t sz = sizeof(pkt_buf); + size_t sz; + char *buf; int i; - pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, pkt_buf, sz); - pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, pkt_buf, sz); + if (c->mode == MODE_VU) { + buf = NULL; + sz = 0; + } else { + buf = pkt_buf; + sz = sizeof(pkt_buf); + } + + pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, buf, sz); + pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, buf, sz); for (i = 0; i < TAP_SEQS; i++) { - tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz); - tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz); + tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz); + tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz);Any chance you could re-use tap_sock_update_buf() for this path that's
very similar?} if (c->fd_tap != -1) { /* Passed as --fd */ @@ -1335,10 +1393,17 @@ void tap_sock_init(struct ctx *c) ASSERT(c->one_off); ref.fd = c->fd_tap; - if (c->mode == MODE_PASST) + switch (c->mode) { + case MODE_PASST: ref.type = EPOLL_TYPE_TAP_PASST; - else + break; + case MODE_PASTA: ref.type = EPOLL_TYPE_TAP_PASTA; + break; + case MODE_VU: + ref.type = EPOLL_TYPE_VHOST_CMD; + break; + } ev.events = EPOLLIN | EPOLLRDHUP; ev.data.u64 = ref.u64; diff --git a/tap.h b/tap.h index ec9e2acec460..c5447f7077eb 100644 --- a/tap.h +++ b/tap.h @@ -40,7 +40,8 @@ static inline struct iovec tap_hdr_iov(const struct ctx *c, */ static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len) { - thdr->vnet_len = htonl(l2len); + if (thdr) + thdr->vnet_len = htonl(l2len); } void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sport, @@ -68,6 +69,8 @@ void tap_handler_pasta(struct ctx *c, uint32_t events, void tap_handler_passt(struct ctx *c, uint32_t events, const struct timespec *now); int tap_sock_unix_open(char *sock_path); +void tap_sock_reset(struct ctx *c); +void tap_sock_update_buf(void *base, size_t size); void tap_sock_init(struct ctx *c); void tap_flush_pools(void); void tap_handler(struct ctx *c, const struct timespec *now); diff --git a/tcp.c b/tcp.c index f9fe1b9a1330..b4b8864799a8 100644 --- a/tcp.c +++ b/tcp.c @@ -304,6 +304,7 @@ #include "flow_table.h" #include "tcp_internal.h" #include "tcp_buf.h" +#include "tcp_vu.h" /* MSS rounding: see SET_MSS() */ #define MSS_DEFAULT 536 @@ -903,6 +904,7 @@ static void tcp_fill_header(struct tcphdr *th, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv4 payload length, host order */ @@ -910,7 +912,7 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct iphdr *iph, struct tcphdr *th, size_t dlen, const uint16_t *check, - uint32_t seq) + uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *src4 = inany_v4(&tapside->oaddr); @@ -929,7 +931,10 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp4(iph, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp4(iph, th);It's at least theoretically possible we could have other use cases for skipping checksums than vhost-user, so I'd kind of like to see this change split out to simplify the huge vhost-user patch a bit.tap_hdr_update(taph, l3len + sizeof(struct ethhdr)); @@ -945,13 +950,14 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv6 payload length, host order */ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct ipv6hdr *ip6h, struct tcphdr *th, - size_t dlen, uint32_t seq) + size_t dlen, uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); size_t l4len = dlen + sizeof(*th); @@ -970,7 +976,10 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp6(ip6h, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp6(ip6h, th); tap_hdr_update(taph, l4len + sizeof(*ip6h) + sizeof(struct ethhdr)); @@ -984,12 +993,14 @@ static size_t tcp_fill_headers6(const
struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: IP payload length, host order */ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, struct iovec *iov, size_t dlen, - const uint16_t *check, uint32_t seq) + const uint16_t *check, uint32_t seq, + bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *a4 = inany_v4(&tapside->oaddr); @@ -998,13 +1009,13 @@ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, return tcp_fill_headers4(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - check, seq); + check, seq, no_tcp_csum); } return tcp_fill_headers6(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - seq); + seq, no_tcp_csum); } /** @@ -1237,6 +1248,9 @@ int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, */ int tcp_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) { + if (c->mode == MODE_VU) + return tcp_vu_send_flag(c, conn, flags); + return tcp_buf_send_flag(c, conn, flags); } @@ -1630,6 +1644,9 @@ static int tcp_sock_consume(const struct tcp_tap_conn *conn, uint32_t ack_seq) */ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn) { + if (c->mode == MODE_VU) + return tcp_vu_data_from_sock(c, conn); + return tcp_buf_data_from_sock(c, conn); } diff --git a/tcp_buf.c b/tcp_buf.c index 1a398461a34b..10a663bdfc26 100644 --- a/tcp_buf.c +++ b/tcp_buf.c @@ -320,7 +320,7 @@ int tcp_buf_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) return ret; } - l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq, false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (flags & DUP_ACK) { @@ -381,7 +381,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn, tcp4_frame_conns[tcp4_payload_used] = conn; iov = tcp4_l2_iov[tcp4_payload_used++]; - l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq, + false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (tcp4_payload_used > TCP_FRAMES_MEM - 1) tcp_payload_flush(c); @@ -389,7 +390,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn, tcp6_frame_conns[tcp6_payload_used] = conn; iov = tcp6_l2_iov[tcp6_payload_used++]; - l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq, + false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (tcp6_payload_used > TCP_FRAMES_MEM - 1) tcp_payload_flush(c); diff --git a/tcp_internal.h b/tcp_internal.h index aa8bb64f1f33..e7fe735bfcb4 100644 --- a/tcp_internal.h +++ b/tcp_internal.h @@ -91,7 +91,8 @@ void tcp_rst_do(struct ctx *c, struct tcp_tap_conn *conn); size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, struct iovec *iov, size_t dlen, - const uint16_t *check, uint32_t seq); + const uint16_t *check, uint32_t seq, + bool no_tcp_csum); int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn, int force_seq, struct tcp_info *tinfo); int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, int flags, diff --git a/tcp_vu.c b/tcp_vu.c new file mode 100644 index 000000000000..e3e32d628524 --- /dev/null +++ b/tcp_vu.c @@ -0,0 +1,647 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* tcp_vu.c - TCP L2 vhost-user management 
functions + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#include <errno.h> +#include <stddef.h> +#include <stdint.h> + +#include <netinet/ip.h> + +#include <sys/socket.h> + +#include <linux/tcp.h> +#include <linux/virtio_net.h> + +#include "util.h" +#include "ip.h" +#include "passt.h" +#include "siphash.h" +#include "inany.h" +#include "vhost_user.h" +#include "tcp.h" +#include "pcap.h" +#include "flow.h" +#include "tcp_conn.h" +#include "flow_table.h" +#include "tcp_vu.h" +#include "tcp_internal.h" +#include "checksum.h" +#include "vu_common.h" + +/** + * struct tcp_payload_t - TCP header and data to send segments with payload + * @th: TCP header + * @data: TCP data + */ +struct tcp_payload_t { + struct tcphdr th; + uint8_t data[IP_MAX_MTU - sizeof(struct tcphdr)]; +};Can you not share this with the tcp_buf.c version? Surely this one also needs to be ((packed)), yes?+ +/** + * struct tcp_flags_t - TCP header and data to send zero-length + * segments (flags) + * @th: TCP header + * @opts TCP options + */ +struct tcp_flags_t { + struct tcphdr th; + char opts[OPT_MSS_LEN + OPT_WS_LEN + 1]; +};Ditto.+static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE];IIUC the code below, iov_vu[0] is always the discard buffer, the remainder corresponds to each element in elem[]. So... shouldn't this have VIRTQUEUE_MAX_SIZE+1 elements?+static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE];I think "elem" is too brief and vague a name for a global.+ +/** + * tcp_vu_l2_hdrlen() - return the size of the header in level 2 frame (TCP) + * @v6: Set for IPv6 packet + * + * Return: Return the size of the header + */ +static size_t tcp_vu_l2_hdrlen(bool v6)I don't love the name here, since the returned size is not just of the L2 header, but the total of the L4, L3, L2 and backend specific headers.+{ + size_t l2_hdrlen; + + l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) + + sizeof(struct tcphdr); + + if (v6) + l2_hdrlen += sizeof(struct ipv6hdr); + else + l2_hdrlen += sizeof(struct iphdr); + + return l2_hdrlen; +} + +/** + * tcp_vu_pcap() - Capture a single frame to pcap file (TCP)Why does this need to be TCP specific? That seems odd..+ * @c: Execution context + * @tapside: Address information for one side of the flow + * @iov: Pointer to the array of IO vectors + * @iov_used: Length of the array + * @l4len: IPv4 Payload lengthIs l4len implied by the total of the lengths in the iov?+ */ +static void tcp_vu_pcap(const struct ctx *c, const struct flowside *tapside, + struct iovec *iov, int iov_used, size_t l4len) +{ + const struct in_addr *src = inany_v4(&tapside->oaddr); + const struct in_addr *dst = inany_v4(&tapside->eaddr);I think calling these 'src4' and 'dst4' would be less misleading.+ char *base = iov[0].iov_base; + size_t size = iov[0].iov_len;IIUC, this is assuming that all the headers are within the first IOV. Is that safe?+ struct tcp_payload_t *bp; + uint32_t sum; + + if (!*c->pcap) + return; + + if (src && dst) { + bp = vu_payloadv4(base); + sum = proto_ipv4_header_psum(l4len, IPPROTO_TCP, + *src, *dst); + } else { + bp = vu_payloadv6(base); + sum = proto_ipv6_header_psum(l4len, IPPROTO_TCP, + &tapside->oaddr.a6, + &tapside->eaddr.a6); + } + iov[0].iov_base = &bp->th; + iov[0].iov_len = size - ((char *)iov[0].iov_base - base); + bp->th.check = 0; + bp->th.check = csum_iov(iov, iov_used, sum);Patching the checksum in here seems messy. 
Couldn't we disable the skipping of the checksum if c->pcap instead?+ /* set iov for pcap logging */ + iov[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + iov[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf); + + pcap_iov(iov, iov_used); + + /* restore iov[0] */ + iov[0].iov_base = base; + iov[0].iov_len = size; +} + +/** + * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload) + * @c: Execution context + * @conn: Connection pointer + * @flags: TCP flags: if not set, send segment only if ACK is due + * + * Return: negative error code on connection reset, 0 otherwise + */ +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *tapside = TAPFLOW(conn); + struct virtio_net_hdr_mrg_rxbuf *vh; + struct iovec l2_iov[TCP_NUM_IOVS]; + size_t l2len, l4len, optlen; + struct iovec in_sg; + struct ethhdr *eh; + int nb_ack; + int ret; + + elem[0].out_num = 0; + elem[0].out_sg = NULL; + elem[0].in_num = 1; + elem[0].in_sg = &in_sg;Is there a reason to use part of the global array, rather than a local here?+ ret = vu_queue_pop(vdev, vq, &elem[0]); + if (ret < 0) + return 0; + + if (elem[0].in_num < 1) { + debug("virtio-net receive queue contains no in buffers"); + vu_queue_rewind(vq, 1); + return 0; + } + + vh = elem[0].in_sg[0].iov_base; + + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(1); + + l2_iov[TCP_IOV_TAP].iov_base = NULL; + l2_iov[TCP_IOV_TAP].iov_len = 0;So.. to me it would seem logical to use TCP_IOV_TAP to cover the virtio_net_hdr_mrg_rxbuf. Is there a reason not to?+ l2_iov[TCP_IOV_ETH].iov_base = (char *)elem[0].in_sg[0].iov_base + sizeof(struct virtio_net_hdr_mrg_rxbuf);You could use vu_eth() here, no?+ l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr); + + eh = l2_iov[TCP_IOV_ETH].iov_base;I think you could do this more neatly by setting eh (with vu_eth()) first, then using IOV_OF_LVALUE(*eh).+ + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));I wonder if it would make sense just to have a single memcpy() of tcp[46]_eth_src, since it's sitting around anyway.+ + if (CONN_V4(conn)) { + struct tcp_flags_t *payload; + struct iphdr *iph; + uint32_t seq; + + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + + l2_iov[TCP_IOV_ETH].iov_len; + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr); + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + + l2_iov[TCP_IOV_IP].iov_len; + + eh->h_proto = htons(ETH_P_IP); + + iph = l2_iov[TCP_IOV_IP].iov_base; + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP);Likewise I think you can make this neater by setting iph first, then the iov using IOV_OF_LVALUE(*iph). Again, this is assuming that all the headers - and in this case the options too - all fit in the single contiguous buffer we've pulled off the queue. Is that safe?+ + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_flags_t, opts) / 4, + .ack = 1 + };Technically this will have some redundant assignments (for all the non-specified fields) with tcp_l2_buf_fill_headers(). It'll be cache hot, so probably not a big deal, but maybe worth thinking about.
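Coming back to the pcap point above, to make it concrete — an untested sketch where the checksum is only skipped while no capture is active, so tcp_vu_pcap() wouldn't need to patch it back in; this just reuses the no_tcp_csum parameter the patch adds to tcp_l2_buf_fill_headers():

	/* Compute the TCP checksum whenever we are capturing, skip it
	 * otherwise: the capture then sees valid packets without
	 * tcp_vu_pcap() having to recompute anything.
	 */
	l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq,
					!*c->pcap);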
> + > + seq = conn->seq_to_tap; > + ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen); > + if (ret <= 0) { > + vu_queue_rewind(vq, 1); > + return ret; > + } > + > + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq, > + true); > + /* keep the following assignment for clarity */ > + /* cppcheck-suppress unreadVariable */ > + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; > + > + l2len = l4len + sizeof(*iph) + sizeof(struct ethhdr); > + } else { > + struct tcp_flags_t *payload; > + struct ipv6hdr *ip6h; > + uint32_t seq; > + > + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + > + l2_iov[TCP_IOV_ETH].iov_len; > + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr); > + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + > + l2_iov[TCP_IOV_IP].iov_len; > + > + eh->h_proto = htons(ETH_P_IPV6); > + > + ip6h = l2_iov[TCP_IOV_IP].iov_base; > + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP);+ + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_flags_t, opts) / 4, + .ack = 1 + };> + > + seq = conn->seq_to_tap; > + ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen); > + if (ret <= 0) { > + vu_queue_rewind(vq, 1); > + return ret; > + } > + > + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq, > + true); > + /* keep the following assignment for clarity */ > + /* cppcheck-suppress unreadVariable */ > + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; > + > + l2len = l4len + sizeof(*ip6h) + sizeof(struct ethhdr); > + } > + l2len += sizeof(struct virtio_net_hdr_mrg_rxbuf); The ethhdr and l4len components could also be added in this common line.+ ASSERT(l2len <= elem[0].in_sg[0].iov_len);Hrm.. 
if you hit this assert, you've already clobbered memory you shouldn't have.

+ elem[0].in_sg[0].iov_len = l2len; + tcp_vu_pcap(c, tapside, &elem[0].in_sg[0], 1, l4len); + + vu_queue_fill(vq, &elem[0], l2len, 0); + nb_ack = 1; + + if (flags & DUP_ACK) { + struct iovec in_sg_dup; + + elem[1].out_num = 0; + elem[1].out_sg = NULL; + elem[1].in_num = 1; + elem[1].in_sg = &in_sg_dup; + ret = vu_queue_pop(vdev, vq, &elem[1]); + if (ret == 0) { + if (elem[1].in_num < 1 || elem[1].in_sg[0].iov_len < l2len) { + vu_queue_rewind(vq, 1); + } else { + memcpy(elem[1].in_sg[0].iov_base, vh, l2len); + nb_ack++; + + tcp_vu_pcap(c, tapside, &elem[1].in_sg[0], 1, + l4len); + + vu_queue_fill(vq, &elem[1], l2len, 1); + } + } + } + + vu_queue_flush(vq, nb_ack); + vu_queue_notify(vdev, vq);

Is there a reason to do this here as we queue each packet, rather than deferring to the same point we call tcp_payload_flush() in the non-VU path?

+ + return 0; +} + +/** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers + * @c: Execution context + * @conn: Connection pointer + * @v4: Set for IPv4 connections + * @fillsize: Number of bytes we can receive + * @data_len: Size of received data (output) + * + * Return: Number of iov entries used to store the data + */ +static ssize_t tcp_vu_sock_recv(struct ctx *c, + struct tcp_tap_conn *conn, bool v4, + size_t fillsize, ssize_t *data_len) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; + struct msghdr mh_sock = { 0 }; + uint16_t mss = MSS_GET(conn); + static int in_sg_count; + int s = conn->sock; + size_t l2_hdrlen; + int segment_size; + int iov_cnt; + ssize_t ret; + + l2_hdrlen = tcp_vu_l2_hdrlen(!v4); + + iov_cnt = 0; + in_sg_count = 0; + segment_size = 0;

I'm finding it pretty hard to figure out what segment_size represents. It seems to be mostly a flag indicating if you're in the middle of a single packet or not?

+ *data_len = 0; + while (fillsize > 0 && iov_cnt < VIRTQUEUE_MAX_SIZE - 1 && + in_sg_count < ARRAY_SIZE(in_sg)) { + + elem[iov_cnt].out_num = 0; + elem[iov_cnt].out_sg = NULL; + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); + if (ret < 0) + break; + + if (elem[iov_cnt].in_num < 1) { + warn("virtio-net receive queue contains no in buffers"); + break; + } + + in_sg_count += elem[iov_cnt].in_num; + + ASSERT(elem[iov_cnt].in_num == 1);

This seems odd to me. If vu_queue_pop() always returns a single buffer, why does its interface seem set up to return multiple?

+ ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);

It should be safe, but logically I think you only want this in the case you're putting the headers in this buffer (segment_size == 0?)
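A sketch of how that scoping could look, reusing the names from this patch and passt's ASSERT() from util.h (whether segment_size == 0 is the right condition is exactly the open question above):

	/* Headers only go at the start of a segment; continuation
	 * buffers carry payload bytes only, so only require header
	 * room when this buffer will actually hold the headers.
	 */
	if (segment_size == 0)
		ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);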
> + if (segment_size == 0) { > + iov_vu[iov_cnt + 1].iov_base = > + (char *)elem[iov_cnt].in_sg[0].iov_base + l2_hdrlen; > + iov_vu[iov_cnt + 1].iov_len = > + elem[iov_cnt].in_sg[0].iov_len - l2_hdrlen; > + } else { > + iov_vu[iov_cnt + 1].iov_base = elem[iov_cnt].in_sg[0].iov_base; > + iov_vu[iov_cnt + 1].iov_len = elem[iov_cnt].in_sg[0].iov_len; > + } > + > + if (iov_vu[iov_cnt + 1].iov_len > fillsize) > + iov_vu[iov_cnt + 1].iov_len = fillsize; > + > + segment_size += iov_vu[iov_cnt + 1].iov_len; > + if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { > + segment_size = 0; > + } else if (segment_size >= mss) { > + iov_vu[iov_cnt + 1].iov_len -= segment_size - mss; > + segment_size = 0; > + } > + fillsize -= iov_vu[iov_cnt + 1].iov_len; > + > + iov_cnt++; > + } > + if (iov_cnt == 0) > + return 0; > + > + mh_sock.msg_iov = iov_vu; > + mh_sock.msg_iovlen = iov_cnt + 1; > + > + do > + ret = recvmsg(s, &mh_sock, MSG_PEEK); > + while (ret < 0 && errno == EINTR); > + > + if (ret < 0) { > + vu_queue_rewind(vq, iov_cnt); > + if (errno != EAGAIN && errno != EWOULDBLOCK) { > + ret = -errno; > + tcp_rst(c, conn); > + } > + return ret; > + } > + if (!ret) { > + vu_queue_rewind(vq, iov_cnt); > + > + if ((conn->events & (SOCK_FIN_RCVD | TAP_FIN_SENT)) == SOCK_FIN_RCVD) { > + int retf = tcp_vu_send_flag(c, conn, FIN | ACK); > + if (retf) { > + tcp_rst(c, conn); > + return retf; > + } > + > + conn_event(c, conn, TAP_FIN_SENT); > + } > + return 0; > + } > + > + *data_len = ret; > + return iov_cnt; > +} > + > +/** > + * tcp_vu_prepare() - Prepare the packet header > + * @c: Execution context > + * @conn: Connection pointer > + * @first: Pointer to the array of IO vectors > + * @data_len: Packet data length > + * @check: Checksum, if already known > + * > + * Return: Level-4 length > + */ > +static size_t tcp_vu_prepare(const struct ctx *c, > + struct tcp_tap_conn *conn, struct iovec *first, > + size_t data_len, const uint16_t **check) > +{ > + const struct flowside *toside = TAPFLOW(conn); > + struct iovec l2_iov[TCP_NUM_IOVS]; > + char *base = first->iov_base; > + struct ethhdr *eh; > + size_t l4len; > + > + /* we guess the first iovec provided by the guest can embed > + * all the headers needed by L2 frame > + */ > + > + l2_iov[TCP_IOV_TAP].iov_base = NULL; > + l2_iov[TCP_IOV_TAP].iov_len = 0; > + l2_iov[TCP_IOV_ETH].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf);+ l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr); + + eh = l2_iov[TCP_IOV_ETH].iov_base;+ + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));> + > + /* initialize header */ > + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { > + struct tcp_payload_t *payload; > + struct iphdr *iph; > + > + ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) + > + sizeof(struct ethhdr) + sizeof(struct iphdr) + > + sizeof(struct tcphdr)); > + > + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + > + l2_iov[TCP_IOV_ETH].iov_len; > + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr); > + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + > + l2_iov[TCP_IOV_IP].iov_len; > + > + > + eh->h_proto = htons(ETH_P_IP); > + > + iph = l2_iov[TCP_IOV_IP].iov_base; > + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP); > + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; > + payload->th = (struct tcphdr){ > + .doff = offsetof(struct tcp_payload_t, data) / 4, > + .ack = 1 > + }; > + > + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, 
data_len, *check, > + conn->seq_to_tap, true); > + /* keep the following assignment for clarity */ > + /* cppcheck-suppress unreadVariable */ > + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; > + > + *check = &iph->check; > + } else { > + struct tcp_payload_t *payload; > + struct ipv6hdr *ip6h; > + > + ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) + > + sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + > + sizeof(struct tcphdr)); > + > + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + > + l2_iov[TCP_IOV_ETH].iov_len; > + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr); > + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + > + l2_iov[TCP_IOV_IP].iov_len; > + > + > + eh->h_proto = htons(ETH_P_IPV6); > + > + ip6h = l2_iov[TCP_IOV_IP].iov_base; > + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP); > + > + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; > + payload->th = (struct tcphdr){ > + .doff = offsetof(struct tcp_payload_t, data) / 4, > + .ack = 1 > + }; > +; > + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, NULL, > + conn->seq_to_tap, true); > + /* keep the following assignment for clarity */ > + /* cppcheck-suppress unreadVariable */ > + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; > + } > + > + return l4len; > +} > + > +/** > + * tcp_vu_data_from_sock() - Handle new data from socket, queue to vhost-user, > + * in window > + * @c: Execution context > + * @conn: Connection pointer > + * > + * Return: Negative on connection reset, 0 otherwise > + */ > +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn) > +{ > + uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap; > + struct vu_dev *vdev = c->vdev; > + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > + const struct flowside *tapside = TAPFLOW(conn); > + uint16_t mss = MSS_GET(conn); > + size_t l2_hdrlen, fillsize; > + int i, iov_cnt, iov_used; > + int v4 = CONN_V4(conn); > + uint32_t already_sent = 0; > + const uint16_t *check; > + struct iovec *first; > + int segment_size; > + int num_buffers; > + ssize_t len; > + > + if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) { > + flow_err(conn, > + "Got packet, but RX virtqueue not usable yet"); > + return 0; > + } > + > + already_sent = conn->seq_to_tap - conn->seq_ack_from_tap; > + > + if (SEQ_LT(already_sent, 0)) { > + /* RFC 761, section 2.1. */ > + flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u", > + conn->seq_ack_from_tap, conn->seq_to_tap); > + conn->seq_to_tap = conn->seq_ack_from_tap; > + already_sent = 0; > + } > + > + if (!wnd_scaled || already_sent >= wnd_scaled) { > + conn_flag(c, conn, STALLED); > + conn_flag(c, conn, ACK_FROM_TAP_DUE); > + return 0; > + } > + > + /* Set up buffer descriptors we'll fill completely and partially. */ > + > + fillsize = wnd_scaled; > + > + if (peek_offset_cap) > + already_sent = 0; > + > + iov_vu[0].iov_base = tcp_buf_discard; > + iov_vu[0].iov_len = already_sent; > + fillsize -= already_sent; I think this line needs to go before the peek_offset_cap check. 
If we have PEEK_OFFSET, we do reduce the amount we read(peek) from the socket, but that unacknowledged data still needs to count against the amount of the window we have left.+ /* collect the buffers from vhost-user and fill them with the + * data from the socket + */ + iov_cnt = tcp_vu_sock_recv(c, conn, v4, fillsize, &len); + if (iov_cnt <= 0) + return iov_cnt; + + len -= already_sent; + if (len <= 0) { + conn_flag(c, conn, STALLED); + vu_queue_rewind(vq, iov_cnt); + return 0; + } + + conn_flag(c, conn, ~STALLED); + + /* Likely, some new data was acked too. */ + tcp_update_seqack_wnd(c, conn, 0, NULL); + + /* initialize headers */ + l2_hdrlen = tcp_vu_l2_hdrlen(!v4); + iov_used = 0; + num_buffers = 0; + check = NULL; + segment_size = 0; + + /* iov_vu is an array of buffers and the buffer size can be + * smaller than the segment size we want to use but with + * num_buffer we can merge several virtio iov buffers in one packet + * we need only to set the packet headers in the first iov and + * num_buffer to the number of iov entries + */ + for (i = 0; i < iov_cnt && len; i++) { + + if (segment_size == 0) + first = &iov_vu[i + 1]; + + if (iov_vu[i + 1].iov_len > (size_t)len) + iov_vu[i + 1].iov_len = len; + + len -= iov_vu[i + 1].iov_len; + iov_used++; + + segment_size += iov_vu[i + 1].iov_len; + num_buffers++; + + if (segment_size >= mss || len == 0 || + i + 1 == iov_cnt || !vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { + struct virtio_net_hdr_mrg_rxbuf *vh; + size_t l4len; + + if (i + 1 == iov_cnt) + check = NULL; + + /* restore first iovec base: point to vnet header */ + first->iov_base = (char *)first->iov_base - l2_hdrlen; + first->iov_len = first->iov_len + l2_hdrlen; + + vh = first->iov_base; + + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(num_buffers); + + l4len = tcp_vu_prepare(c, conn, first, segment_size, &check); + + tcp_vu_pcap(c, tapside, first, num_buffers, l4len); + + conn->seq_to_tap += segment_size; + + segment_size = 0; + num_buffers = 0; + } + } + + /* release unused buffers */ + vu_queue_rewind(vq, iov_cnt - iov_used); + + /* send packets */ + vu_send_frame(vdev, vq, elem, &iov_vu[1], iov_used);I think that would be better called vu_send_frames(), since it can send multiple frames IIUC (and that matches tap_send_frames()).+ + conn_flag(c, conn, ACK_FROM_TAP_DUE); + + return 0; +} diff --git a/tcp_vu.h b/tcp_vu.h new file mode 100644 index 000000000000..b433c3e0d06f --- /dev/null +++ b/tcp_vu.h @@ -0,0 +1,12 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef TCP_VU_H +#define TCP_VU_H + +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags); +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn); + +#endif /*TCP_VU_H */ diff --git a/udp.c b/udp.c index 2ba00c9c20a8..f7b5b5eb6421 100644 --- a/udp.c +++ b/udp.c @@ -109,8 +109,7 @@ #include "pcap.h" #include "log.h" #include "flow_table.h" - -#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ +#include "udp_internal.h" /* "Spliced" sockets indexed by bound port (host order) */ static int udp_splice_ns [IP_VERSIONS][NUM_PORTS]; @@ -118,20 +117,8 @@ static int udp_splice_init[IP_VERSIONS][NUM_PORTS]; /* Static buffers */ -/** - * struct udp_payload_t - UDP header and data for inbound messages - * @uh: UDP header - * @data: UDP data - */ -static struct udp_payload_t { - struct udphdr uh; - char data[USHRT_MAX - sizeof(struct udphdr)]; 
-#ifdef __AVX2__ -} __attribute__ ((packed, aligned(32))) -#else -} __attribute__ ((packed, aligned(__alignof__(unsigned int)))) -#endif -udp_payload[UDP_MAX_FRAMES]; +/* UDP header and data for inbound messages */ +static struct udp_payload_t udp_payload[UDP_MAX_FRAMES]; /* Ethernet header for IPv4 frames */ static struct ethhdr udp4_eth_hdr; @@ -298,11 +285,13 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum * * Return: size of IPv4 payload (UDP header + data) */ -static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, - const struct flowside *toside, size_t dlen) +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum) { const struct in_addr *src = inany_v4(&toside->oaddr); const struct in_addr *dst = inany_v4(&toside->eaddr); @@ -319,7 +308,10 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, bp->uh.source = htons(toside->oport); bp->uh.dest = htons(toside->eport); bp->uh.len = htons(l4len); - csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); + if (no_udp_csum) + bp->uh.check = 0; + else + csum_udp4(&bp->uh, *src, *dst, bp->data, dlen);

As with TCP, I think splitting out the checksum suppression into a preliminary patch would make things easier to follow.

return l4len; } @@ -330,11 +322,13 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum > * > * Return: size of IPv6 payload (UDP header + data) > */ > -static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > - const struct flowside *toside, size_t dlen) > +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen, > + bool no_udp_csum) > { > uint16_t l4len = dlen + sizeof(bp->uh); > > @@ -348,7 +342,16 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > bp->uh.source = htons(toside->oport); > bp->uh.dest = htons(toside->eport); > bp->uh.len = ip6h->payload_len; > - csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, bp->data, dlen); > + if (no_udp_csum) { > + /* 0 is an invalid checksum for UDP IPv6 and dropped by > + * the kernel stack, even if the checksum is disabled by virtio > + * flags. We need to put any non-zero value here. > + */ > + bp->uh.check = 0xffff; > + } else { > + csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, > + bp->data, dlen); > + } > > return l4len; > } > @@ -358,9 +361,11 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > * @mmh: Receiving mmsghdr array > * @idx: Index of the datagram to prepare > * @toside: Flowside for destination side > + * @no_udp_csum: Do not set UDP checksum > */ > -static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, > - const struct flowside *toside) > +static void udp_tap_prepare(const struct mmsghdr *mmh, > + unsigned idx, const struct flowside *toside, > + bool no_udp_csum) > { > struct iovec (*tap_iov)[UDP_NUM_IOVS] = &udp_l2_iov[idx]; > struct udp_payload_t *bp = &udp_payload[idx]; > @@ -368,13 +373,15 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, > size_t l4len; > > if (!inany_v4(&toside->eaddr) || !inany_v4(&toside->oaddr)) { > - l4len = udp_update_hdr6(&bm->ip6h, bp, toside, mmh[idx].msg_len); > + l4len = udp_update_hdr6(&bm->ip6h, bp, toside, > + mmh[idx].msg_len, no_udp_csum); > tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip6h) + > sizeof(udp6_eth_hdr)); > (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp6_eth_hdr); > (*tap_iov)[UDP_IOV_IP] = IOV_OF_LVALUE(bm->ip6h); > } else { > - l4len = udp_update_hdr4(&bm->ip4h, bp, toside, mmh[idx].msg_len); > + l4len = udp_update_hdr4(&bm->ip4h, bp, toside, > + mmh[idx].msg_len, no_udp_csum); > tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip4h) + > sizeof(udp4_eth_hdr)); > (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp4_eth_hdr); > @@ -447,7 +454,7 @@ static int udp_sock_recverr(int s) > * > * Return: Number of errors handled, or < 0 if we have an unrecoverable error > */ > -static int udp_sock_errs(const struct ctx *c, int s, uint32_t events) > +int udp_sock_errs(const struct ctx *c, int s, uint32_t events) > { > unsigned n_err = 0; > socklen_t errlen; > @@ -524,7 +531,7 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, > } > > /** > - * udp_listen_sock_handler() - Handle new data from socket > + * udp_buf_listen_sock_handler() - Handle new data from socket > * @c: Execution context > * @ref: epoll reference > * @events: epoll events bitmap > @@ -532,8 +539,8 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, > * > * #syscalls recvmmsg > */ > -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now) > +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > { > const socklen_t sasize = sizeof(udp_meta[0].s_in); > int n, i; > @@ -565,7 +572,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > udp_splice_prepare(udp_mh_recv, i); > } else if (batchpif == PIF_TAP) { > udp_tap_prepare(udp_mh_recv, i, > - flowside_at_sidx(batchsidx)); > + flowside_at_sidx(batchsidx), > + false); > } > > if (++i >= n) > @@ -599,7 +607,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > } > > /** > - * udp_reply_sock_handler() - Handle new data from flow specific socket > + * udp_buf_reply_sock_handler() - Handle new data from flow specific socket > * @c: Execution context > * @ref: epoll reference > * @events: epoll events bitmap > @@ -607,8 +615,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > * > * #syscalls recvmmsg > */ > -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t
events, const struct timespec *now) > +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now) > { > flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); > const struct flowside *toside = flowside_at_sidx(tosidx); > @@ -636,7 +644,7 @@ void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > if (pif_is_socket(topif)) > udp_splice_prepare(udp_mh_recv, i); > else if (topif == PIF_TAP) > - udp_tap_prepare(udp_mh_recv, i, toside); > + udp_tap_prepare(udp_mh_recv, i, toside, false); > /* Restore sockaddr length clobbered by recvmsg() */ > udp_mh_recv[i].msg_hdr.msg_namelen = sizeof(udp_meta[i].s_in); > } > diff --git a/udp.h b/udp.h > index a8e76bfe8f37..ea23fb36b637 100644 > --- a/udp.h > +++ b/udp.h > @@ -9,10 +9,10 @@ > #define UDP_TIMER_INTERVAL 1000 /* ms */ > > void udp_portmap_clear(void); > -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now); > -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > - uint32_t events, const struct timespec *now); > +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, > + uint32_t events, const struct timespec *now); > int udp_tap_handler(const struct ctx *c, uint8_t pif, > sa_family_t af, const void *saddr, const void *daddr, > const struct pool *p, int idx, const struct timespec *now); > diff --git a/udp_internal.h b/udp_internal.h > new file mode 100644 > index 000000000000..cc80e3055423 > --- /dev/null > +++ b/udp_internal.h > @@ -0,0 +1,34 @@ > +/* SPDX-License-Identifier: GPL-2.0-or-later > + * Copyright (c) 2021 Red Hat GmbH > + * Author: Stefano Brivio <sbrivio(a)redhat.com> > + */ > + > +#ifndef UDP_INTERNAL_H > +#define UDP_INTERNAL_H > + > +#include "tap.h" /* needed by udp_meta_t */ > + > +#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ > + > +/** > + * struct udp_payload_t - UDP header and data for inbound messages > + * @uh: UDP header > + * @data: UDP data > + */ > +struct udp_payload_t { > + struct udphdr uh; > + char data[USHRT_MAX - sizeof(struct udphdr)]; > +#ifdef __AVX2__ > +} __attribute__ ((packed, aligned(32))); > +#else > +} __attribute__ ((packed, aligned(__alignof__(unsigned int)))); > +#endif > + > +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen, > + bool no_udp_csum); > +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, > + const struct flowside *toside, size_t dlen, > + bool no_udp_csum); > +int udp_sock_errs(const struct ctx *c, int s, uint32_t events); > +#endif /* UDP_INTERNAL_H */ > diff --git a/udp_vu.c b/udp_vu.c > new file mode 100644 > index 000000000000..fa390dec994a > --- /dev/null > +++ b/udp_vu.c > @@ -0,0 +1,397 @@ > +// SPDX-License-Identifier: GPL-2.0-or-later > +/* udp_vu.c - UDP L2 vhost-user management functions > + * > + * Copyright Red Hat > + * Author: Laurent Vivier <lvivier(a)redhat.com> > + */ > + > +#include <unistd.h> > +#include <assert.h> > +#include <net/ethernet.h> > +#include <net/if.h> > +#include <netinet/in.h> > +#include <netinet/ip.h> > +#include <netinet/udp.h> > +#include <stdint.h> > +#include <stddef.h> > +#include <sys/uio.h> > +#include <linux/virtio_net.h> > + > +#include "checksum.h" > +#include "util.h" > +#include "ip.h" > +#include "siphash.h" > +#include 
"inany.h" > +#include "passt.h" > +#include "pcap.h" > +#include "log.h" > +#include "vhost_user.h" > +#include "udp_internal.h" > +#include "flow.h" > +#include "flow_table.h" > +#include "udp_flow.h" > +#include "udp_vu.h" > +#include "vu_common.h" > + > +static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE]; > +static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE]; > +static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; > +static int in_sg_count; > + > +/** > + * udp_vu_l2_hdrlen() - return the size of the header in level 2 frame (UDP) > + * @v6: Set for IPv6 packet > + * > + * Return: Return the size of the header > + */ > +static size_t udp_vu_l2_hdrlen(bool v6) > +{ > + size_t l2_hdrlen; > + > + l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) + > + sizeof(struct udphdr); > + > + if (v6) > + l2_hdrlen += sizeof(struct ipv6hdr); > + else > + l2_hdrlen += sizeof(struct iphdr); > + > + return l2_hdrlen; > +} > + > +static int udp_vu_sock_init(int s, union sockaddr_inany *s_in) > +{ > + struct msghdr msg = { > + .msg_name = s_in, > + .msg_namelen = sizeof(union sockaddr_inany), > + }; > + > + return recvmsg(s, &msg, MSG_PEEK | MSG_DONTWAIT); > +} > + > +/** > + * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers > + * @c: Execution context > + * @s: Socket to receive from > + * @events: epoll events bitmap > + * @v6: Set for IPv6 connections > + * @datalen: Size of received data (output) > + * > + * Return: Number of iov entries used to store the datagram > + */ > +static int udp_vu_sock_recv(const struct ctx *c, int s, uint32_t events, > + bool v6, ssize_t *data_len) > +{ > + struct vu_dev *vdev = c->vdev; > + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; > + int virtqueue_max, iov_cnt, idx, iov_used; > + size_t fillsize, size, off, l2_hdrlen; > + struct virtio_net_hdr_mrg_rxbuf *vh; > + struct msghdr msg = { 0 }; > + char *base; > + > + ASSERT(!c->no_udp); > + > + if (!(events & EPOLLIN)) > + return 0; > + > + /* compute L2 header length */ > + > + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + virtqueue_max = VIRTQUEUE_MAX_SIZE; > + else > + virtqueue_max = 1; > + > + l2_hdrlen = udp_vu_l2_hdrlen(v6); > + > + fillsize = USHRT_MAX; > + iov_cnt = 0; > + in_sg_count = 0; > + while (fillsize && iov_cnt < virtqueue_max && > + in_sg_count < ARRAY_SIZE(in_sg)) { > + int ret; > + > + elem[iov_cnt].out_num = 0; > + elem[iov_cnt].out_sg = NULL; > + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; > + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; > + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); > + if (ret < 0) > + break; > + in_sg_count += elem[iov_cnt].in_num; > + > + if (elem[iov_cnt].in_num < 1) { > + err("virtio-net receive queue contains no in buffers"); > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + ASSERT(elem[iov_cnt].in_num == 1);+ ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);> + > + if (iov_cnt == 0) { > + base = elem[iov_cnt].in_sg[0].iov_base; > + size = elem[iov_cnt].in_sg[0].iov_len; > + > + /* keep space for the headers */ > + iov_vu[0].iov_base = base + l2_hdrlen; > + iov_vu[0].iov_len = size - l2_hdrlen; > + } else { > + iov_vu[iov_cnt].iov_base = elem[iov_cnt].in_sg[0].iov_base; > + iov_vu[iov_cnt].iov_len = elem[iov_cnt].in_sg[0].iov_len; > + } > + > + if (iov_vu[iov_cnt].iov_len > fillsize) > + iov_vu[iov_cnt].iov_len = fillsize; > + > + fillsize -= iov_vu[iov_cnt].iov_len; > + > + iov_cnt++; > + } > + if (iov_cnt == 0) > + return 0; > + > + msg.msg_iov = iov_vu; > + msg.msg_iovlen = iov_cnt; > 
> + *data_len = recvmsg(s, &msg, 0); > + if (*data_len < 0) { > + vu_queue_rewind(vq, iov_cnt); > + return 0; > + } > + > + /* restore original values */ > + iov_vu[0].iov_base = base; > + iov_vu[0].iov_len = size; > + > + /* count the numbers of buffer filled by recvmsg() */ > + idx = iov_skip_bytes(iov_vu, iov_cnt, l2_hdrlen + *data_len, > + &off); > + /* adjust last iov length */ > + if (idx < iov_cnt) > + iov_vu[idx].iov_len = off; > + iov_used = idx + !!off; > + > + /* release unused buffers */ > + vu_queue_rewind(vq, iov_cnt - iov_used); > + > + vh = (struct virtio_net_hdr_mrg_rxbuf *)base; > + vh->hdr = VU_HEADER; > + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) > + vh->num_buffers = htole16(iov_used);

IIUC in the !VIRTIO_NET_F_MRG_RXBUF, we need the guest supplied buffers to be big enough to hold an entire datagram + headers. We should probably have a warning somewhere above if that's not the case, yes? And we need to make sure we drop the packet in that case, not truncate it.

> + return iov_used; > +} > + > +/** > + * udp_vu_prepare() - Prepare the packet header > + * @c: Execution context > + * @toside: Address information for one side of the flow > + * @data_len: Packet data length > + * > + * Return: Level-4 length > + */ > +static size_t udp_vu_prepare(const struct ctx *c, > + const struct flowside *toside, ssize_t data_len) > +{ > + struct ethhdr *eh; > + size_t l4len; > + > + /* ethernet header */ > + eh = vu_eth(iov_vu[0].iov_base);

+ + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source));

> + > + /* initialize header */ > + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { > + struct iphdr *iph = vu_ip(iov_vu[0].iov_base); > + struct udp_payload_t *bp = vu_payloadv4(iov_vu[0].iov_base); > + > + eh->h_proto = htons(ETH_P_IP); > + > + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP); > + > + l4len = udp_update_hdr4(iph, bp, toside, data_len, true); > + } else { > + struct ipv6hdr *ip6h = vu_ip(iov_vu[0].iov_base); > + struct udp_payload_t *bp = vu_payloadv6(iov_vu[0].iov_base); > + > + eh->h_proto = htons(ETH_P_IPV6); > + > + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP); > + > + l4len = udp_update_hdr6(ip6h, bp, toside, data_len, true); > + } > + > + return l4len; > +} > + > +/** > + * udp_vu_pcap() - Capture a single frame to pcap file (UDP) > + * @c: Execution context > + * @toside: Address information for one side of the flow > + * @l4len: L4 payload length > + * @iov_used: Length of the array > + */ > +static void udp_vu_pcap(const struct ctx *c, const struct flowside *toside, > + size_t l4len, int iov_used) > +{ > + const struct in_addr *src4 = inany_v4(&toside->oaddr); > + const struct in_addr *dst4 = inany_v4(&toside->eaddr); > + char *base = iov_vu[0].iov_base; > + size_t size = iov_vu[0].iov_len; > + struct udp_payload_t *bp; > + uint32_t sum; > + > + if (!*c->pcap) > + return; > + > + if (src4 && dst4) { > + bp = vu_payloadv4(base); > + sum = proto_ipv4_header_psum(l4len, IPPROTO_UDP, *src4, *dst4); > + } else { > + bp = vu_payloadv6(base); > + sum = proto_ipv6_header_psum(l4len, IPPROTO_UDP, > + &toside->oaddr.a6, > + &toside->eaddr.a6); > + bp->uh.check = 0; /* by default, set to 0xffff */ > + } > + > + iov_vu[0].iov_base = &bp->uh; > + iov_vu[0].iov_len = size - ((char *)iov_vu[0].iov_base - base); > + > + bp->uh.check = csum_iov(iov_vu, iov_used, sum);

Similar comments here to the TCP case.

+ /* set iov for pcap logging */ + iov_vu[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + iov_vu[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf); + pcap_iov(iov_vu, iov_used); + + /* restore iov_vu[0] */ + iov_vu[0].iov_base = base; + iov_vu[0].iov_len = size; +} + +/** + * udp_vu_listen_sock_handler() - Handle new data from socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *toside; + union sockaddr_inany s_in; + flow_sidx_t batchsidx; + uint8_t batchpif; + bool v6; + int i; + + if (udp_sock_errs(c, ref.fd, events) < 0) { + err("UDP: Unrecoverable error on listening socket:" + " (%s port %hu)", pif_name(ref.udp.pif), ref.udp.port); + return; + } + + if (udp_vu_sock_init(ref.fd, &s_in) < 0) + return;

Hrm, it would be nice if we could avoid this additional MSG_PEEK just to initialise the batch. In fact, I think this has to change somehow. In the loop below, you're assuming that everything belongs to the same flow, taken from this first packet. For a listening socket that might not be the case. You need to check the address on each datagram to see which flow it belongs to.

+ batchsidx = udp_flow_from_sock(c, ref, &s_in, now); + batchpif = pif_at_sidx(batchsidx); + + if (batchpif != PIF_TAP) { + if (flow_sidx_valid(batchsidx)) { + flow_sidx_t fromsidx = flow_sidx_opposite(batchsidx); + struct udp_flow *uflow = udp_at_sidx(batchsidx); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(pif_at_sidx(fromsidx)), + pif_name(batchpif)); + } else { + debug("Discarding 1 datagram without flow");

Ah.. except.. we haven't actually discarded the datagram. We've PEEKed it but never read it "for real". So we could start spinning on a flowless packet if we ever got one.

+ } + + return; + } + + toside = flowside_at_sidx(batchsidx); + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, ref.fd, events, v6, &data_len); + if (iov_used <= 0) + return;

Pity we have to go packet by packet rather than using recvmmsg(). Although.. while that's the case, there's probably not any point to the "batch" stuff - that's only there so we can consolidate multiple packets together; really only useful for qemu socket, where it really can become a single big sendmsg(). So ditching that, you should be able to also avoid the MSG_PEEK, just doing the flow calculation for each packet separately. Uh.. except.. no, because you need to know if it's v4 or v6 before you can allocate the buffers for the recvmsg(). Ouch.. I'm not sure how to deal with that. There might also be a way to allow a recvmmsg() in at least some cases: you could look ahead in the vu queue to see how many buffers you can grab that are large enough to hold a max-size UDP packet. You could then recvmmsg() that many datagrams, one into each buffer.
Still has the v4/v6 problem though.+ l4len = udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} + +/** + * udp_vu_reply_sock_handler() - Handle new data from flow specific socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); + const struct flowside *toside = flowside_at_sidx(tosidx); + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + struct udp_flow *uflow = udp_at_sidx(ref.flowside); + int from_s = uflow->s[ref.flowside.sidei]; + uint8_t topif = pif_at_sidx(tosidx); + bool v6; + int i; + + ASSERT(!c->no_udp); + ASSERT(uflow); + + if (udp_sock_errs(c, from_s, events) < 0) { + flow_err(uflow, "Unrecoverable error on reply socket"); + flow_err_details(uflow); + udp_flow_close(c, uflow); + return; + } + + if (topif != PIF_TAP) { + uint8_t frompif = pif_at_sidx(ref.flowside); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(frompif), pif_name(topif)); + return; + } + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, from_s, events, v6, &data_len); + if (iov_used <= 0) + return;There's a subtle difference between the "buf" and vu logic that might bite us here. In buf we read the datagrams first, then if we can't fit them into the tap device we just discard them. It's UDP, that's fine. VU, necessarily, tries to grab buffers from the tap side before it even reads the datagrams. If it can't we'll abort here. But that means the datagrams are still queued on the socket side, which could lead to rolling epoll events. I guess there's a good chance we'll just manage to send them on the next cycle, but I wonder if for the benefit of flow control we should explicitly discard them instead.+ flow_trace(uflow, "Received 1 datagram on reply socket"); + uflow->ts = now->tv_sec; + + l4len = udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} diff --git a/udp_vu.h b/udp_vu.h new file mode 100644 index 000000000000..ba7018d3bf01 --- /dev/null +++ b/udp_vu.h @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef UDP_VU_H +#define UDP_VU_H + +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +#endif /* UDP_VU_H */ diff --git a/vhost_user.c b/vhost_user.c index 3b38e06f268e..0f98ee7fa7c3 100644 --- a/vhost_user.c +++ b/vhost_user.c @@ -52,7 +52,6 @@ * this is part of the vhost-user backend * convention. 
*/ -/* cppcheck-suppress unusedFunction */ void vu_print_capabilities(void) { info("{"); @@ -162,9 +161,7 @@ static void vmsg_close_fds(const struct vhost_user_msg *vmsg) */ static void vu_remove_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL); } /** @@ -425,7 +422,6 @@ static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq) * * Return: 0 if the zone is in a mapped memory region, -1 otherwise */ -/* cppcheck-suppress unusedFunction */ int vu_packet_check_range(void *buf, size_t offset, size_t len, const char *start) { @@ -515,6 +511,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev, } } + /* As vu_packet_check_range() has no access to the number of + * memory regions, mark the end of the array with mmap_addr = 0 + */ + ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1); + vdev->regions[vdev->nregions].mmap_addr = 0; + + tap_sock_update_buf(vdev->regions, 0); + return false; } @@ -643,9 +647,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vdev, */ static void vu_set_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK, .fd = fd }; + struct epoll_event ev = { 0 }; + + ev.data.u64 = ref.u64; + ev.events = EPOLLIN; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev); } /** @@ -685,7 +692,6 @@ static int vu_wait_queue(const struct vu_virtq *vq) * * Return: number of bytes sent, -1 if there is an error */ -/* cppcheck-suppress unusedFunction */ int vu_send(struct vu_dev *vdev, const void *buf, size_t size) { struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; @@ -869,7 +875,6 @@ static void vu_handle_tx(struct vu_dev *vdev, int index, * @ref: epoll reference information * @now: Current timestamp */ -/* cppcheck-suppress unusedFunction */ void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref, const struct timespec *now) { @@ -1104,11 +1109,11 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev, * @c: execution context * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_init(struct ctx *c, struct vu_dev *vdev) { int i; + c->vdev = vdev; vdev->context = c; for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) { vdev->vq[i] = (struct vu_virtq){ @@ -1124,7 +1129,6 @@ void vu_init(struct ctx *c, struct vu_dev *vdev) * vu_cleanup() - Reset vhost-user device * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_cleanup(struct vu_dev *vdev) { unsigned int i; @@ -1171,8 +1175,7 @@ void vu_cleanup(struct vu_dev *vdev) */ static void vu_sock_reset(struct vu_dev *vdev) { - /* Placeholder to add passt related code */ - (void)vdev; + tap_sock_reset(vdev->context); } static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev, @@ -1200,7 +1203,6 @@ static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev, * @fd: vhost-user message socket * @events: epoll events */ -/* cppcheck-suppress unusedFunction */ void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events) { struct vhost_user_msg msg = { 0 }; diff --git a/virtio.c b/virtio.c index 237395396606..31e56def2c23 100644 --- a/virtio.c +++ b/virtio.c @@ -562,7 +562,6 @@ void vu_queue_unpop(struct vu_virtq *vq) * @vq: Virtqueue * @num: Number of element to unpop */ -/* cppcheck-suppress unusedFunction */ bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num) { if (num > vq->inuse) diff --git a/vu_common.c 
b/vu_common.c new file mode 100644 index 000000000000..7a9caae17f42 --- /dev/null +++ b/vu_common.c @@ -0,0 +1,36 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + * + * vu_common.c - vhost-user common UDP and TCP functions + */ + +#include <unistd.h> +#include <sys/uio.h> +#include <linux/virtio_net.h> + +#include "util.h" +#include "passt.h" +#include "vhost_user.h" +#include "vu_common.h" + +/** + * vu_send_frame() - Send one frame to the vhost-user interface

Is it necessarily one frame? I thought it could be multiple, depending on what num_bufs values are set in the buffers.

+ * @vdev: vhost-user device + * @vq: vhost-user virtqueue + * @elem: virtqueue element array to send back to the virtqueue + * @iov_vu: iovec array containing the data to send + * @iov_used: Length of the array + */ +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, + struct vu_virtq_element *elem, const struct iovec *iov_vu, + int iov_used) +{ + int i; + + for (i = 0; i < iov_used; i++) + vu_queue_fill(vq, &elem[i], iov_vu[i].iov_len, i); + + vu_queue_flush(vq, iov_used); + vu_queue_notify(vdev, vq); +} diff --git a/vu_common.h new file mode 100644 index 000000000000..20950b44493c --- /dev/null +++ b/vu_common.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + * + * vhost-user common UDP and TCP functions + */ + +#ifndef VU_COMMON_H +#define VU_COMMON_H + +static inline void *vu_eth(void *base) +{ + return ((char *)base + sizeof(struct virtio_net_hdr_mrg_rxbuf)); +} + +static inline void *vu_ip(void *base) +{ + return (struct ethhdr *)vu_eth(base) + 1; +} + +static inline void *vu_payloadv4(void *base) +{ + return (struct iphdr *)vu_ip(base) + 1; +} + +static inline void *vu_payloadv6(void *base) +{ + return (struct ipv6hdr *)vu_ip(base) + 1; +} + +void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq, + struct vu_virtq_element *elem, const struct iovec *iov_vu, + int iov_used); +#endif /* VU_COMMON_H */

-- David Gibson (he or they) | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you, not the other way | around. http://www.ozlabs.org/~dgibson
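The vu_common.h accessors quoted above encode a fixed buffer layout: virtio-net header, then Ethernet header, then IP header, then the L4 payload. A small self-contained program can sanity-check that pointer arithmetic; vu_eth() and friends are copied verbatim from the header above, everything else is example scaffolding:

#include <stdio.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/virtio_net.h>

static inline void *vu_eth(void *base)
{
	return ((char *)base + sizeof(struct virtio_net_hdr_mrg_rxbuf));
}

static inline void *vu_ip(void *base)
{
	return (struct ethhdr *)vu_eth(base) + 1;
}

static inline void *vu_payloadv4(void *base)
{
	return (struct iphdr *)vu_ip(base) + 1;
}

int main(void)
{
	char buf[256] = { 0 };

	/* The offset of the IPv4 payload equals the vnet + Ethernet +
	 * IPv4 header sizes, i.e. the IPv4 case of the *_vu_l2_hdrlen()
	 * helpers earlier in this patch (modulo the L4 header they add). */
	printf("v4 payload offset: %td\n", (char *)vu_payloadv4(buf) - buf);
	return 0;
}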
On 16/09/2024 09:00, David Gibson wrote:

I tried but it's over complicated to take an iovec array as input and to use another iovec array (provided by the guest) to send data. It's easier to have a buffer and to copy it to the iovec array buffers of the guest. Moreover we don't need to have a tap_send_frames_vu() (see vu_handle_tx()), so we will introduce it to always send an iovec array with only one entry.

Thanks, Laurent

--- a/tap.c +++ b/tap.c @@ -58,6 +58,7 @@ #include "packet.h" #include "tap.h" #include "log.h" +#include "vhost_user.h" /* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */ static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf); @@ -78,16 +79,22 @@ void tap_send_single(const struct ctx *c, const void *data, size_t l2len) struct iovec iov[2]; size_t iovcnt = 0; - if (c->mode == MODE_PASST) { + switch (c->mode) { + case MODE_PASST: iov[iovcnt] = IOV_OF_LVALUE(vnet_len); iovcnt++; - } - - iov[iovcnt].iov_base = (void *)data; - iov[iovcnt].iov_len = l2len; - iovcnt++; + /* fall through */ + case MODE_PASTA: + iov[iovcnt].iov_base = (void *)data; + iov[iovcnt].iov_len = l2len; + iovcnt++; - tap_send_frames(c, iov, iovcnt, 1); + tap_send_frames(c, iov, iovcnt, 1); + break; + case MODE_VU: + vu_send(c->vdev, data, l2len);

I'm a bit uneasy re-introducing a parallel send function for the slow path, rather than using a common tap_send_frames() interface. Any chance you can unify those sensibly? Bearing in mind that this _is_ the slow path, so if you have to copy a bunch of stuff, that's ok.
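Laurent's point, copying a linear frame into the guest-provided iovec array, is a short loop in practice. A generic, self-contained sketch of the copy that vu_send() would have to perform (not passt's actual helper, just an illustration):

#include <string.h>
#include <sys/uio.h>

/* Copy len bytes from a linear buffer into an iovec array, returning
 * the number of bytes that fit. */
static size_t buf_to_iov(const void *buf, size_t len,
			 const struct iovec *iov, size_t iovcnt)
{
	size_t done = 0, i;

	for (i = 0; i < iovcnt && done < len; i++) {
		size_t n = len - done;

		if (n > iov[i].iov_len)
			n = iov[i].iov_len;
		memcpy(iov[i].iov_base, (const char *)buf + done, n);
		done += n;
	}
	return done;
}

Going the other way, iovec to iovec, needs two running offsets and partial-chunk handling on both sides, which is the extra complexity Laurent is avoiding on this slow path.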
On Tue, Sep 17, 2024 at 12:03:13PM +0200, Laurent Vivier wrote:

> On 16/09/2024 09:00, David Gibson wrote:
> > I'm a bit uneasy re-introducing a parallel send function for the slow path, rather than using a common tap_send_frames() interface. Any chance you can unify those sensibly? Bearing in mind that this _is_ the slow path, so if you have to copy a bunch of stuff, that's ok.
>
> I tried but it's over complicated to take an iovec array as input and to use another iovec array (provided by the guest) to send data. It's easier to have a buffer and to copy it to the iovec array buffers of the guest. Moreover we don't need to have a tap_send_frames_vu() (see vu_handle_tx()), so we will introduce it to always send an iovec array with only one entry.

Oh, ok. IIUC you're saying that both this path and the "main" path go through vu_send(), in which case this is fine.

-- David Gibson (he or they) | I'll have my music baroque, and my code david AT gibson.dropbear.id.au | minimalist, thank you, not the other way | around. http://www.ozlabs.org/~dgibson
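For readers skimming the hunk under discussion: MODE_PASST prepends the 4-byte vnet length and deliberately falls through to share the payload append with MODE_PASTA, while MODE_VU bypasses the iovec path entirely. A condensed, self-contained illustration of that control flow (the helper and values here are invented for the example, not passt code):

#include <stdio.h>
#include <stdint.h>
#include <sys/uio.h>

enum mode { MODE_PASST, MODE_PASTA, MODE_VU };

static size_t build_iov(enum mode m, struct iovec *iov,
			uint32_t *vnet_len, void *data, size_t l2len)
{
	size_t cnt = 0;

	switch (m) {
	case MODE_PASST:	/* qemu stream socket: length header first */
		iov[cnt].iov_base = vnet_len;
		iov[cnt].iov_len = sizeof(*vnet_len);
		cnt++;
		/* fall through */
	case MODE_PASTA:	/* then the frame itself */
		iov[cnt].iov_base = data;
		iov[cnt].iov_len = l2len;
		cnt++;
		break;
	case MODE_VU:		/* vhost-user copies into guest memory instead */
		break;
	}
	return cnt;
}

int main(void)
{
	struct iovec iov[2];
	uint32_t vnet_len = 42;
	char frame[42];

	printf("passt iovcnt=%zu, pasta iovcnt=%zu\n",
	       build_iov(MODE_PASST, iov, &vnet_len, frame, sizeof(frame)),
	       build_iov(MODE_PASTA, iov, &vnet_len, frame, sizeof(frame)));
	return 0;
}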
Sorry for the delay, I wanted first to finish extending tests to run also functional ones (not just throughput and latency) with vhost-user, but it's taking me a bit longer than expected, so here comes the review. By the way, by mistake I let passt run in non-vhost-user mode while QEMU was configured to use it. This results in a loop: qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: Failed to read msg header. Read 0 instead of 12. Original request 1. qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: vhost_backend_init failed: Protocol error qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: failed to init vhost_net for queue 0 qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: Failed to read msg header. Read 0 instead of 12. Original request 1. qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: vhost_backend_init failed: Protocol error qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: failed to init vhost_net for queue 0 ... and passt says: accepted connection from PID 4807 Bad frame size from guest, resetting connection Client connection closed accepted connection from PID 4807 Bad frame size from guest, resetting connection Client connection closed ... while happily flooding system logs. I guess it should be fixed in QEMU at some point: if the vhost_net initialisation fails, I don't see the point in retrying. This is without the "reconnect" option by the way: $ qemu-system-$(uname -m) -machine accel=kvm -M accel=kvm:tcg -m 16G -cpu host -smp 6 -kernel /home/sbrivio/nf/arch/x86/boot/bzImage -initrd /home/sbrivio/passt/test/mbuto.img -nographic -serial stdio -nodefaults -append "console=ttyS0 mitigations=off apparmor=0" -chardev socket,id=chr0,path=/tmp/passt-tests-TLZU2Y/passt_in_ns/passt.socket -netdev vhost-user,id=netdev0,chardev=chr0 -device virtio-net,netdev=netdev0 -object memory-backend-memfd,id=memfd0,share=on,size=16G -numa node,memdev=memfd0 -pidfile /tmp/passt-tests-TLZU2Y/passt_in_ns/qemu.pid -device vhost-vsock-pci,guest-cid=94557 On Fri, 13 Sep 2024 18:20:34 +0200 Laurent Vivier <lvivier(a)redhat.com> wrote:add virtio and vhost-user functions to connect with QEMU. $ ./passt --vhost-user and # qemu-system-x86_64 ... -m 4G \ -object memory-backend-memfd,id=memfd0,share=on,size=4G \ -numa node,memdev=memfd0 \ -chardev socket,id=chr0,path=/tmp/passt_1.socket \ -netdev vhost-user,id=netdev0,chardev=chr0 \ -device virtio-net,mac=9a:2b:2c:2d:2e:2f,netdev=netdev0 \ ... 
Signed-off-by: Laurent Vivier <lvivier(a)redhat.com> --- Makefile | 6 +- checksum.c | 1 - conf.c | 23 +- epoll_type.h | 4 + isolation.c | 17 +- packet.c | 11 + packet.h | 8 +- passt.1 | 10 +- passt.c | 26 +- passt.h | 6 + pcap.c | 1 - tap.c | 111 +++++++-- tap.h | 5 +- tcp.c | 31 ++- tcp_buf.c | 8 +- tcp_internal.h | 3 +- tcp_vu.c | 647 +++++++++++++++++++++++++++++++++++++++++++++++++ tcp_vu.h | 12 + udp.c | 78 +++--- udp.h | 8 +- udp_internal.h | 34 +++ udp_vu.c | 397 ++++++++++++++++++++++++++++++ udp_vu.h | 13 + vhost_user.c | 32 +-- virtio.c | 1 - vu_common.c | 36 +++ vu_common.h | 34 +++ 27 files changed, 1457 insertions(+), 106 deletions(-) create mode 100644 tcp_vu.c create mode 100644 tcp_vu.h create mode 100644 udp_internal.h create mode 100644 udp_vu.c create mode 100644 udp_vu.h create mode 100644 vu_common.c create mode 100644 vu_common.h diff --git a/Makefile b/Makefile index 0e8ed60a0da1..1e8910dda1f4 100644 --- a/Makefile +++ b/Makefile @@ -54,7 +54,8 @@ FLAGS += -DDUAL_STACK_SOCKETS=$(DUAL_STACK_SOCKETS) PASST_SRCS = arch.c arp.c checksum.c conf.c dhcp.c dhcpv6.c flow.c fwd.c \ icmp.c igmp.c inany.c iov.c ip.c isolation.c lineread.c log.c mld.c \ ndp.c netlink.c packet.c passt.c pasta.c pcap.c pif.c tap.c tcp.c \ - tcp_buf.c tcp_splice.c udp.c udp_flow.c util.c vhost_user.c virtio.c + tcp_buf.c tcp_splice.c tcp_vu.c udp.c udp_flow.c udp_vu.c util.c \ + vhost_user.c virtio.c vu_common.c QRAP_SRCS = qrap.c SRCS = $(PASST_SRCS) $(QRAP_SRCS) @@ -64,7 +65,8 @@ PASST_HEADERS = arch.h arp.h checksum.h conf.h dhcp.h dhcpv6.h flow.h fwd.h \ flow_table.h icmp.h icmp_flow.h inany.h iov.h ip.h isolation.h \ lineread.h log.h ndp.h netlink.h packet.h passt.h pasta.h pcap.h pif.h \ siphash.h tap.h tcp.h tcp_buf.h tcp_conn.h tcp_internal.h tcp_splice.h \ - udp.h udp_flow.h util.h vhost_user.h virtio.h + tcp_vu.h udp.h udp_flow.h udp_internal.h udp_vu.h util.h vhost_user.h \ + virtio.h vu_common.h HEADERS = $(PASST_HEADERS) seccomp.h C := \#include <linux/tcp.h>\nstruct tcp_info x = { .tcpi_snd_wnd = 0 }; diff --git a/checksum.c b/checksum.c index 006614fcbb28..aa5b7ae1cb66 100644 --- a/checksum.c +++ b/checksum.c @@ -501,7 +501,6 @@ uint16_t csum(const void *buf, size_t len, uint32_t init) * * Return: 16-bit folded, complemented checksum */ -/* cppcheck-suppress unusedFunction */ uint16_t csum_iov(const struct iovec *iov, size_t n, uint32_t init) { unsigned int i; diff --git a/conf.c b/conf.c index b27588649af3..eb8e1685713a 100644 --- a/conf.c +++ b/conf.c @@ -45,6 +45,7 @@ #include "lineread.h" #include "isolation.h" #include "log.h" +#include "vhost_user.h" /** * next_chunk - Return the next piece of a string delimited by a character @@ -769,9 +770,14 @@ static void usage(const char *name, FILE *f, int status) " default: same interface name as external one\n"); } else { fprintf(f, - " -s, --socket PATH UNIX domain socket path\n" + " -s, --socket, --socket-path PATH UNIX domain socket path\n" " default: probe free path starting from " UNIX_SOCK_PATH "\n", 1); + fprintf(f, + " --vhost-user Enable vhost-user mode\n" + " UNIX domain socket is provided by -s option\n" + " --print-capabilities print back-end capabilities in JSON format,\n" + " only meaningful for vhost-user mode\n"); } fprintf(f, @@ -1291,6 +1297,10 @@ void conf(struct ctx *c, int argc, char **argv) {"netns-only", no_argument, NULL, 20 }, {"map-host-loopback", required_argument, NULL, 21 }, {"map-guest-addr", required_argument, NULL, 22 }, + {"vhost-user", no_argument, NULL, 23 }, + /* vhost-user backend program convention 
*/ + {"print-capabilities", no_argument, NULL, 24 }, + {"socket-path", required_argument, NULL, 's' }, { 0 }, }; const char *logname = (c->mode == MODE_PASTA) ? "pasta" : "passt"; @@ -1429,7 +1439,6 @@ void conf(struct ctx *c, int argc, char **argv) sizeof(c->ip6.ifname_out), "%s", optarg); if (ret <= 0 || ret >= (int)sizeof(c->ip6.ifname_out)) die("Invalid interface name: %s", optarg); - break; case 17: if (c->mode != MODE_PASTA) @@ -1468,6 +1477,16 @@ void conf(struct ctx *c, int argc, char **argv) conf_nat(optarg, &c->ip4.map_guest_addr, &c->ip6.map_guest_addr, NULL); break; + case 23: + if (c->mode == MODE_PASTA) { + err("--vhost-user is for passt mode only"); + usage(argv[0], stdout, EXIT_SUCCESS); + } + c->mode = MODE_VU; + break; + case 24: + vu_print_capabilities();I guess you should also check if (c->mode == MODE_PASTA) for this one.+ break; case 'd': c->debug = 1; c->quiet = 0; diff --git a/epoll_type.h b/epoll_type.h index 0ad1efa0ccec..f3ef41584757 100644 --- a/epoll_type.h +++ b/epoll_type.h @@ -36,6 +36,10 @@ enum epoll_type { EPOLL_TYPE_TAP_PASST, /* socket listening for qemu socket connections */ EPOLL_TYPE_TAP_LISTEN, + /* vhost-user command socket */ + EPOLL_TYPE_VHOST_CMD, + /* vhost-user kick event socket */ + EPOLL_TYPE_VHOST_KICK, EPOLL_NUM_TYPES, }; diff --git a/isolation.c b/isolation.c index 45fba1e68b9d..3d5fd60fde46 100644 --- a/isolation.c +++ b/isolation.c @@ -377,14 +377,21 @@ void isolate_postfork(const struct ctx *c) { struct sock_fprog prog; - prctl(PR_SET_DUMPABLE, 0); + //prctl(PR_SET_DUMPABLE, 0); - if (c->mode == MODE_PASTA) { - prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); - prog.filter = filter_pasta; - } else { + switch (c->mode) { + case MODE_PASST: prog.len = (unsigned short)ARRAY_SIZE(filter_passt); prog.filter = filter_passt; + break; + case MODE_PASTA: + prog.len = (unsigned short)ARRAY_SIZE(filter_pasta); + prog.filter = filter_pasta; + break; + case MODE_VU: + prog.len = (unsigned short)ARRAY_SIZE(filter_vu); + prog.filter = filter_vu; + break; } if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) || diff --git a/packet.c b/packet.c index 37489961a37e..e5a78d079231 100644 --- a/packet.c +++ b/packet.c @@ -36,6 +36,17 @@ static int packet_check_range(const struct pool *p, size_t offset, size_t len, const char *start, const char *func, int line) { + if (p->buf_size == 0) { + int ret; + + ret = vu_packet_check_range((void *)p->buf, offset, len, start); + + if (ret == -1) + trace("cannot find region, %s:%i", func, line); + + return ret; + } + if (start < p->buf) { trace("packet start %p before buffer start %p, " "%s:%i", (void *)start, (void *)p->buf, func, line); diff --git a/packet.h b/packet.h index 8377dcf678bb..3f70e949c066 100644 --- a/packet.h +++ b/packet.h @@ -8,8 +8,10 @@ /** * struct pool - Generic pool of packets stored in a buffer - * @buf: Buffer storing packet descriptors - * @buf_size: Total size of buffer + * @buf: Buffer storing packet descriptors, + * a struct vu_dev_region array for passt vhost-user mode + * @buf_size: Total size of buffer, + * 0 for passt vhost-user mode * @size: Number of usable descriptors for the pool * @count: Number of used descriptors for the pool * @pkt: Descriptors: see macros below @@ -22,6 +24,8 @@ struct pool { struct iovec pkt[1]; }; +int vu_packet_check_range(void *buf, size_t offset, size_t len, + const char *start); void packet_add_do(struct pool *p, size_t len, const char *start, const char *func, int line); void *packet_get_do(const struct pool *p, const size_t idx, diff --git a/passt.1 
b/passt.1 index 79d134dbe098..822714147be8 100644 --- a/passt.1 +++ b/passt.1 @@ -378,12 +378,20 @@ interface address are configured on a given host interface. .SS \fBpasst\fR-only options .TP -.BR \-s ", " \-\-socket " " \fIpath +.BR \-s ", " \-\-socket-path ", " \-\-socket " " \fIpath Path for UNIX domain socket used by \fBqemu\fR(1) or \fBqrap\fR(1) to connect to \fBpasst\fR. Default is to probe a free socket, not accepting connections, starting from \fI/tmp/passt_1.socket\fR to \fI/tmp/passt_64.socket\fR. +.TP +.BR \-\-vhost-userI think we should introduce this option as deprecated right away, so that we can switch to vhost-user mode by default soon (checking if the hypervisor sends us a vhost-user command) without having to keep this option around. At that point, we can add --no-vhost-user instead. If it makes sense, you could copy the text from --stderr: Note that this configuration option is \fBdeprecated\fR and will be removed in a future version.+Enable vhost-user. The vhost-user command socket is provided by \fB--socket\fR. + +.TP +.BR \-\-print-capabilities +Print back-end capabilities in JSON format, only meaningful for vhost-user mode. + .TP .BR \-F ", " \-\-fd " " \fIFD Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket is opened diff --git a/passt.c b/passt.c index ad6f0bc32df6..b64efeaf346c 100644 --- a/passt.c +++ b/passt.c @@ -74,6 +74,8 @@ char *epoll_type_str[] = { [EPOLL_TYPE_TAP_PASTA] = "/dev/net/tun device", [EPOLL_TYPE_TAP_PASST] = "connected qemu socket", [EPOLL_TYPE_TAP_LISTEN] = "listening qemu socket", + [EPOLL_TYPE_VHOST_CMD] = "vhost-user command socket", + [EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket", }; static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES, "epoll_type_str[] doesn't match enum epoll_type"); @@ -206,6 +208,7 @@ int main(int argc, char **argv) struct rlimit limit; struct timespec now; struct sigaction sa; + struct vu_dev vdev; clock_gettime(CLOCK_MONOTONIC, &log_start); @@ -262,6 +265,8 @@ int main(int argc, char **argv) pasta_netns_quit_init(&c); tap_sock_init(&c); + if (c.mode == MODE_VU) + vu_init(&c, &vdev); secret_init(&c); @@ -352,14 +357,31 @@ loop: tcp_timer_handler(&c, ref); break; case EPOLL_TYPE_UDP_LISTEN: - udp_listen_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) {Eventually, we'll probably want to make passt more generic and to support multiple guests, so at that point this might become EPOLL_TYPE_UDP_VU_LISTEN if it's a socket we opened for a guest using vhost-user. Or maybe we'll have to unify the receive paths, so this will remain EPOLL_TYPE_UDP_LISTEN. 
Either way, _if it's more convenient for you right now_, I wouldn't see any issue in defining new EPOLL_TYPE_UDP_VU_{LISTEN,REPLY} values.+ udp_vu_listen_sock_handler(&c, ref, eventmask, + &now); + } else { + udp_buf_listen_sock_handler(&c, ref, eventmask, + &now); + } break; case EPOLL_TYPE_UDP_REPLY: - udp_reply_sock_handler(&c, ref, eventmask, &now); + if (c.mode == MODE_VU) + udp_vu_reply_sock_handler(&c, ref, eventmask, + &now); + else + udp_buf_reply_sock_handler(&c, ref, eventmask, + &now); break; case EPOLL_TYPE_PING: icmp_sock_handler(&c, ref); break; + case EPOLL_TYPE_VHOST_CMD: + vu_control_handler(&vdev, c.fd_tap, eventmask); + break; + case EPOLL_TYPE_VHOST_KICK: + vu_kick_cb(&vdev, ref, &now); + break; default: /* Can't happen */ ASSERT(0); diff --git a/passt.h b/passt.h index 031c9b669cc4..a98f043c7e64 100644 --- a/passt.h +++ b/passt.h @@ -25,6 +25,8 @@ union epoll_ref; #include "fwd.h" #include "tcp.h" #include "udp.h" +#include "udp_vu.h" +#include "vhost_user.h" /* Default address for our end on the tap interface. Bit 0 of byte 0 must be 0 * (unicast) and bit 1 of byte 1 must be 1 (locally administered). Otherwise @@ -94,6 +96,7 @@ struct fqdn { enum passt_modes { MODE_PASST, MODE_PASTA, + MODE_VU, }; /** @@ -227,6 +230,7 @@ struct ip6_ctx { * @no_ra: Disable router advertisements * @low_wmem: Low probed net.core.wmem_max * @low_rmem: Low probed net.core.rmem_max + * @vdev: vhost-user device */ struct ctx { enum passt_modes mode; @@ -287,6 +291,8 @@ struct ctx { int low_wmem; int low_rmem; + + struct vu_dev *vdev; }; void proto_update_l2_buf(const unsigned char *eth_d, diff --git a/pcap.c b/pcap.c index 46cc4b0d72b6..7e9c56090041 100644 --- a/pcap.c +++ b/pcap.c @@ -140,7 +140,6 @@ void pcap_multiple(const struct iovec *iov, size_t frame_parts, unsigned int n, * containing packet data to write, including L2 header * @iovcnt: Number of buffers (@iov entries) */ -/* cppcheck-suppress unusedFunction */ void pcap_iov(const struct iovec *iov, size_t iovcnt) { struct timespec now; diff --git a/tap.c b/tap.c index 41af6a6d0c85..3e1b3c13c321 100644 --- a/tap.c +++ b/tap.c @@ -58,6 +58,7 @@ #include "packet.h" #include "tap.h" #include "log.h" +#include "vhost_user.h" /* IPv4 (plus ARP) and IPv6 message batches from tap/guest to IP handlers */ static PACKET_POOL_NOINIT(pool_tap4, TAP_MSGS, pkt_buf); @@ -78,16 +79,22 @@ void tap_send_single(const struct ctx *c, const void *data, size_t l2len) struct iovec iov[2]; size_t iovcnt = 0; - if (c->mode == MODE_PASST) { + switch (c->mode) { + case MODE_PASST: iov[iovcnt] = IOV_OF_LVALUE(vnet_len); iovcnt++; - } - - iov[iovcnt].iov_base = (void *)data; - iov[iovcnt].iov_len = l2len; - iovcnt++; + /* fall through */ + case MODE_PASTA: + iov[iovcnt].iov_base = (void *)data; + iov[iovcnt].iov_len = l2len; + iovcnt++; - tap_send_frames(c, iov, iovcnt, 1); + tap_send_frames(c, iov, iovcnt, 1); + break; + case MODE_VU: + vu_send(c->vdev, data, l2len); + break; + } } /** @@ -406,10 +413,18 @@ size_t tap_send_frames(const struct ctx *c, const struct iovec *iov, if (!nframes) return 0; - if (c->mode == MODE_PASTA) + switch (c->mode) { + case MODE_PASTA: m = tap_send_frames_pasta(c, iov, bufs_per_frame, nframes); - else + break; + case MODE_PASST: m = tap_send_frames_passt(c, iov, bufs_per_frame, nframes); + break; + case MODE_VU: + /* fall through */ + default: + ASSERT(0); + } if (m < nframes) debug("tap: failed to send %zu frames of %zu", @@ -968,7 +983,7 @@ void tap_add_packet(struct ctx *c, ssize_t l2len, char *p) * tap_sock_reset() - Handle 
closing or failure of connect AF_UNIX socket * @c: Execution context */ -static void tap_sock_reset(struct ctx *c) +void tap_sock_reset(struct ctx *c) { info("Client connection closed%s", c->one_off ? ", exiting" : ""); @@ -979,6 +994,8 @@ static void tap_sock_reset(struct ctx *c) epoll_ctl(c->epollfd, EPOLL_CTL_DEL, c->fd_tap, NULL); close(c->fd_tap); c->fd_tap = -1; + if (c->mode == MODE_VU) + vu_cleanup(c->vdev); } /** @@ -1196,11 +1213,17 @@ static void tap_sock_unix_init(struct ctx *c) ev.data.u64 = ref.u64; epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap_listen, &ev); - info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):"); - info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s", - c->sock_path); - info("or qrap, for earlier qemu versions:"); - info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio"); + if (c->mode == MODE_VU) { + info("You can start qemu with:"); + info(" kvm ... -chardev socket,id=chr0,path=%s -netdev vhost-user,id=netdev0,chardev=chr0 -device virtio-net,netdev=netdev0 -object memory-backend-memfd,id=memfd0,share=on,size=$RAMSIZE -numa node,memdev=memfd0\n", + c->sock_path); + } else { + info("\nYou can now start qemu (>= 7.2, with commit 13c6be96618c):"); + info(" kvm ... -device virtio-net-pci,netdev=s -netdev stream,id=s,server=off,addr.type=unix,addr.path=%s", + c->sock_path); + info("or qrap, for earlier qemu versions:"); + info(" ./qrap 5 kvm ... -net socket,fd=5 -net nic,model=virtio"); + } } /** @@ -1210,8 +1233,8 @@ static void tap_sock_unix_init(struct ctx *c) */ void tap_listen_handler(struct ctx *c, uint32_t events) { - union epoll_ref ref = { .type = EPOLL_TYPE_TAP_PASST }; struct epoll_event ev = { 0 }; + union epoll_ref ref; int v = INT_MAX / 2; struct ucred ucred; socklen_t len; @@ -1251,6 +1274,10 @@ void tap_listen_handler(struct ctx *c, uint32_t events) trace("tap: failed to set SO_SNDBUF to %i", v); ref.fd = c->fd_tap; + if (c->mode == MODE_VU) + ref.type = EPOLL_TYPE_VHOST_CMD; + else + ref.type = EPOLL_TYPE_TAP_PASST; ev.events = EPOLLIN | EPOLLRDHUP; ev.data.u64 = ref.u64; epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev); @@ -1312,21 +1339,52 @@ static void tap_sock_tun_init(struct ctx *c) epoll_ctl(c->epollfd, EPOLL_CTL_ADD, c->fd_tap, &ev); } +/** + * tap_sock_update_buf() - Set the buffer base and size for the pool of packets + * @base: Buffer base + * @size Buffer size + */ +void tap_sock_update_buf(void *base, size_t size) +{ + int i; + + pool_tap4_storage.buf = base; + pool_tap4_storage.buf_size = size; + pool_tap6_storage.buf = base; + pool_tap6_storage.buf_size = size; + + for (i = 0; i < TAP_SEQS; i++) { + tap4_l4[i].p.buf = base; + tap4_l4[i].p.buf_size = size; + tap6_l4[i].p.buf = base; + tap6_l4[i].p.buf_size = size; + } +} + /** * tap_sock_init() - Create and set up AF_UNIX socket or tuntap file descriptor * @c: Execution context */ void tap_sock_init(struct ctx *c) { - size_t sz = sizeof(pkt_buf); + size_t sz; + char *buf; int i; - pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, pkt_buf, sz); - pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, pkt_buf, sz); + if (c->mode == MODE_VU) { + buf = NULL; + sz = 0; + } else { + buf = pkt_buf; + sz = sizeof(pkt_buf); + } + + pool_tap4_storage = PACKET_INIT(pool_tap4, TAP_MSGS, buf, sz); + pool_tap6_storage = PACKET_INIT(pool_tap6, TAP_MSGS, buf, sz); for (i = 0; i < TAP_SEQS; i++) { - tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, pkt_buf, sz); - tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, 
pkt_buf, sz); + tap4_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz); + tap6_l4[i].p = PACKET_INIT(pool_l4, UIO_MAXIOV, buf, sz); } if (c->fd_tap != -1) { /* Passed as --fd */ @@ -1335,10 +1393,17 @@ void tap_sock_init(struct ctx *c) ASSERT(c->one_off); ref.fd = c->fd_tap; - if (c->mode == MODE_PASST) + switch (c->mode) { + case MODE_PASST: ref.type = EPOLL_TYPE_TAP_PASST; - else + break; + case MODE_PASTA: ref.type = EPOLL_TYPE_TAP_PASTA; + break; + case MODE_VU: + ref.type = EPOLL_TYPE_VHOST_CMD; + break; + } ev.events = EPOLLIN | EPOLLRDHUP; ev.data.u64 = ref.u64; diff --git a/tap.h b/tap.h index ec9e2acec460..c5447f7077eb 100644 --- a/tap.h +++ b/tap.h @@ -40,7 +40,8 @@ static inline struct iovec tap_hdr_iov(const struct ctx *c, */ static inline void tap_hdr_update(struct tap_hdr *thdr, size_t l2len) { - thdr->vnet_len = htonl(l2len); + if (thdr) + thdr->vnet_len = htonl(l2len); } void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sport, @@ -68,6 +69,8 @@ void tap_handler_pasta(struct ctx *c, uint32_t events, void tap_handler_passt(struct ctx *c, uint32_t events, const struct timespec *now); int tap_sock_unix_open(char *sock_path); +void tap_sock_reset(struct ctx *c); +void tap_sock_update_buf(void *base, size_t size); void tap_sock_init(struct ctx *c); void tap_flush_pools(void); void tap_handler(struct ctx *c, const struct timespec *now); diff --git a/tcp.c b/tcp.c index f9fe1b9a1330..b4b8864799a8 100644 --- a/tcp.c +++ b/tcp.c @@ -304,6 +304,7 @@ #include "flow_table.h" #include "tcp_internal.h" #include "tcp_buf.h" +#include "tcp_vu.h" /* MSS rounding: see SET_MSS() */ #define MSS_DEFAULT 536 @@ -903,6 +904,7 @@ static void tcp_fill_header(struct tcphdr *th, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv4 payload length, host order */ @@ -910,7 +912,7 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct iphdr *iph, struct tcphdr *th, size_t dlen, const uint16_t *check, - uint32_t seq) + uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *src4 = inany_v4(&tapside->oaddr); @@ -929,7 +931,10 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp4(iph, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp4(iph, th); tap_hdr_update(taph, l3len + sizeof(struct ethhdr)); @@ -945,13 +950,14 @@ static size_t tcp_fill_headers4(const struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: The IPv6 payload length, host order */ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, struct tap_hdr *taph, struct ipv6hdr *ip6h, struct tcphdr *th, - size_t dlen, uint32_t seq) + size_t dlen, uint32_t seq, bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); size_t l4len = dlen + sizeof(*th); @@ -970,7 +976,10 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, tcp_fill_header(th, conn, seq); - tcp_update_check_tcp6(ip6h, th); + if (no_tcp_csum) + th->check = 0; + else + tcp_update_check_tcp6(ip6h, th); tap_hdr_update(taph, l4len + sizeof(*ip6h) + sizeof(struct ethhdr)); @@ -984,12 +993,14 @@ static size_t tcp_fill_headers6(const struct tcp_tap_conn *conn, * @dlen: TCP payload length * @check: Checksum, if 
already known * @seq: Sequence number for this segment + * @no_tcp_csum: Do not set TCP checksum * * Return: IP payload length, host order */ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, struct iovec *iov, size_t dlen, - const uint16_t *check, uint32_t seq) + const uint16_t *check, uint32_t seq, + bool no_tcp_csum) { const struct flowside *tapside = TAPFLOW(conn); const struct in_addr *a4 = inany_v4(&tapside->oaddr); @@ -998,13 +1009,13 @@ size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, return tcp_fill_headers4(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - check, seq); + check, seq, no_tcp_csum); } return tcp_fill_headers6(conn, iov[TCP_IOV_TAP].iov_base, iov[TCP_IOV_IP].iov_base, iov[TCP_IOV_PAYLOAD].iov_base, dlen, - seq); + seq, no_tcp_csum); } /** @@ -1237,6 +1248,9 @@ int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, */ int tcp_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) { + if (c->mode == MODE_VU) + return tcp_vu_send_flag(c, conn, flags); + return tcp_buf_send_flag(c, conn, flags); } @@ -1630,6 +1644,9 @@ static int tcp_sock_consume(const struct tcp_tap_conn *conn, uint32_t ack_seq) */ static int tcp_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn) { + if (c->mode == MODE_VU) + return tcp_vu_data_from_sock(c, conn); + return tcp_buf_data_from_sock(c, conn); } diff --git a/tcp_buf.c b/tcp_buf.c index 1a398461a34b..10a663bdfc26 100644 --- a/tcp_buf.c +++ b/tcp_buf.c @@ -320,7 +320,7 @@ int tcp_buf_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) return ret; } - l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, optlen, NULL, seq, false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (flags & DUP_ACK) { @@ -381,7 +381,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn, tcp4_frame_conns[tcp4_payload_used] = conn; iov = tcp4_l2_iov[tcp4_payload_used++]; - l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, check, seq, + false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (tcp4_payload_used > TCP_FRAMES_MEM - 1) tcp_payload_flush(c); @@ -389,7 +390,8 @@ static void tcp_data_to_tap(struct ctx *c, struct tcp_tap_conn *conn, tcp6_frame_conns[tcp6_payload_used] = conn; iov = tcp6_l2_iov[tcp6_payload_used++]; - l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq); + l4len = tcp_l2_buf_fill_headers(conn, iov, dlen, NULL, seq, + false); iov[TCP_IOV_PAYLOAD].iov_len = l4len; if (tcp6_payload_used > TCP_FRAMES_MEM - 1) tcp_payload_flush(c); diff --git a/tcp_internal.h b/tcp_internal.h index aa8bb64f1f33..e7fe735bfcb4 100644 --- a/tcp_internal.h +++ b/tcp_internal.h @@ -91,7 +91,8 @@ void tcp_rst_do(struct ctx *c, struct tcp_tap_conn *conn); size_t tcp_l2_buf_fill_headers(const struct tcp_tap_conn *conn, struct iovec *iov, size_t dlen, - const uint16_t *check, uint32_t seq); + const uint16_t *check, uint32_t seq, + bool no_tcp_csum); int tcp_update_seqack_wnd(const struct ctx *c, struct tcp_tap_conn *conn, int force_seq, struct tcp_info *tinfo); int tcp_prepare_flags(struct ctx *c, struct tcp_tap_conn *conn, int flags, diff --git a/tcp_vu.c b/tcp_vu.c new file mode 100644 index 000000000000..e3e32d628524 --- /dev/null +++ b/tcp_vu.c @@ -0,0 +1,647 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* tcp_vu.c - TCP L2 vhost-user management functions + * + * Copyright Red Hat + * Author: Laurent Vivier 
<lvivier(a)redhat.com> + */ + +#include <errno.h> +#include <stddef.h> +#include <stdint.h> + +#include <netinet/ip.h> + +#include <sys/socket.h> + +#include <linux/tcp.h> +#include <linux/virtio_net.h> + +#include "util.h" +#include "ip.h" +#include "passt.h" +#include "siphash.h" +#include "inany.h" +#include "vhost_user.h" +#include "tcp.h" +#include "pcap.h" +#include "flow.h" +#include "tcp_conn.h" +#include "flow_table.h" +#include "tcp_vu.h" +#include "tcp_internal.h" +#include "checksum.h" +#include "vu_common.h" + +/** + * struct tcp_payload_t - TCP header and data to send segments with payload + * @th: TCP header + * @data: TCP data + */ +struct tcp_payload_t { + struct tcphdr th; + uint8_t data[IP_MAX_MTU - sizeof(struct tcphdr)]; +}; + +/** + * struct tcp_flags_t - TCP header and data to send zero-length + * segments (flags) + * @th: TCP header + * @opts: TCP options + */ +struct tcp_flags_t { + struct tcphdr th; + char opts[OPT_MSS_LEN + OPT_WS_LEN + 1]; +}; + +static struct iovec iov_vu[VIRTQUEUE_MAX_SIZE]; +static struct vu_virtq_element elem[VIRTQUEUE_MAX_SIZE]; + +/** + * tcp_vu_l2_hdrlen() - return the size of the header in level 2 frame (TCP) + * @v6: Set for IPv6 packet + * + * Return: Return the size of the header + */ +static size_t tcp_vu_l2_hdrlen(bool v6) +{ + size_t l2_hdrlen; + + l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) + + sizeof(struct tcphdr); + + if (v6) + l2_hdrlen += sizeof(struct ipv6hdr); + else + l2_hdrlen += sizeof(struct iphdr); + + return l2_hdrlen; +} + +/** + * tcp_vu_pcap() - Capture a single frame to pcap file (TCP) + * @c: Execution context + * @tapside: Address information for one side of the flow + * @iov: Pointer to the array of IO vectors + * @iov_used: Length of the array + * @l4len: IPv4 Payload length + */ +static void tcp_vu_pcap(const struct ctx *c, const struct flowside *tapside,

'c' should be const (unless you modify data pointed by it, but I don't see where), otherwise gcc complains: tcp.c: In function ‘tcp_send_flag’: tcp.c:1249:41: warning: passing argument 1 of ‘tcp_vu_send_flag’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers] 1249 | return tcp_vu_send_flag(c, conn, flags); | ^ In file included from tcp.c:307: tcp_vu.h:9:34: note: expected ‘struct ctx *’ but argument is of type ‘const struct ctx *’ 9 | int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags); | ~~~~~~~~~~~~^

+ struct iovec *iov, int iov_used, size_t l4len) +{ + const struct in_addr *src = inany_v4(&tapside->oaddr); + const struct in_addr *dst = inany_v4(&tapside->eaddr); + char *base = iov[0].iov_base; + size_t size = iov[0].iov_len; + struct tcp_payload_t *bp; + uint32_t sum; + + if (!*c->pcap) + return; + + if (src && dst) { + bp = vu_payloadv4(base); + sum = proto_ipv4_header_psum(l4len, IPPROTO_TCP, + *src, *dst); + } else { + bp = vu_payloadv6(base); + sum = proto_ipv6_header_psum(l4len, IPPROTO_TCP, + &tapside->oaddr.a6, + &tapside->eaddr.a6); + } + iov[0].iov_base = &bp->th; + iov[0].iov_len = size - ((char *)iov[0].iov_base - base); + bp->th.check = 0; + bp->th.check = csum_iov(iov, iov_used, sum); + + /* set iov for pcap logging */ + iov[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + iov[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf); + + pcap_iov(iov, iov_used); + + /* restore iov[0] */ + iov[0].iov_base = base; + iov[0].iov_len = size; +} + +/** + * tcp_vu_send_flag() - Send segment with flags to vhost-user (no payload) + * @c: Execution context +
* @conn: Connection pointer + * @flags: TCP flags: if not set, send segment only if ACK is due + * + * Return: negative error code on connection reset, 0 otherwise + */ +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *tapside = TAPFLOW(conn); + struct virtio_net_hdr_mrg_rxbuf *vh; + struct iovec l2_iov[TCP_NUM_IOVS]; + size_t l2len, l4len, optlen; + struct iovec in_sg; + struct ethhdr *eh; + int nb_ack; + int ret; + + elem[0].out_num = 0; + elem[0].out_sg = NULL; + elem[0].in_num = 1; + elem[0].in_sg = &in_sg; + ret = vu_queue_pop(vdev, vq, &elem[0]); + if (ret < 0) + return 0; + + if (elem[0].in_num < 1) { + debug("virtio-net receive queue contains no in buffers"); + vu_queue_rewind(vq, 1); + return 0; + } + + vh = elem[0].in_sg[0].iov_base; + + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(1); + + l2_iov[TCP_IOV_TAP].iov_base = NULL; + l2_iov[TCP_IOV_TAP].iov_len = 0; + l2_iov[TCP_IOV_ETH].iov_base = (char *)elem[0].in_sg[0].iov_base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr); + + eh = l2_iov[TCP_IOV_ETH].iov_base; + + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); + + if (CONN_V4(conn)) { + struct tcp_flags_t *payload; + struct iphdr *iph; + uint32_t seq; + + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + + l2_iov[TCP_IOV_ETH].iov_len; + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr); + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + + l2_iov[TCP_IOV_IP].iov_len; + + eh->h_proto = htons(ETH_P_IP); + + iph = l2_iov[TCP_IOV_IP].iov_base; + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP); + + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_flags_t, opts) / 4, + .ack = 1 + }; + + seq = conn->seq_to_tap; + ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen); + if (ret <= 0) { + vu_queue_rewind(vq, 1); + return ret; + } + + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq, + true); + /* keep the following assignment for clarity */ + /* cppcheck-suppress unreadVariable */ + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; + + l2len = l4len + sizeof(*iph) + sizeof(struct ethhdr); + } else { + struct tcp_flags_t *payload; + struct ipv6hdr *ip6h; + uint32_t seq; + + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + + l2_iov[TCP_IOV_ETH].iov_len; + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr); + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + + l2_iov[TCP_IOV_IP].iov_len; + + eh->h_proto = htons(ETH_P_IPV6); + + ip6h = l2_iov[TCP_IOV_IP].iov_base; + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP); + + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_flags_t, opts) / 4, + .ack = 1 + }; + + seq = conn->seq_to_tap; + ret = tcp_prepare_flags(c, conn, flags, &payload->th, payload->opts, &optlen); + if (ret <= 0) { + vu_queue_rewind(vq, 1); + return ret; + } + + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, optlen, NULL, seq, + true); + /* keep the following assignment for clarity */ + /* cppcheck-suppress unreadVariable */ + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; + + l2len = l4len + sizeof(*ip6h) + sizeof(struct ethhdr); + } + l2len 
+= sizeof(struct virtio_net_hdr_mrg_rxbuf); + ASSERT(l2len <= elem[0].in_sg[0].iov_len); + + elem[0].in_sg[0].iov_len = l2len; + tcp_vu_pcap(c, tapside, &elem[0].in_sg[0], 1, l4len); + + vu_queue_fill(vq, &elem[0], l2len, 0); + nb_ack = 1;

It took me a while to understand this, I guess "nb" means "number" (of ACKs) but you set this to one regardless of whether you send any ACK segment (also on SYN). What about 'count', 'segs', 'seg_count', 'pkt_count'...?

+ + if (flags & DUP_ACK) { + struct iovec in_sg_dup; + + elem[1].out_num = 0; + elem[1].out_sg = NULL; + elem[1].in_num = 1; + elem[1].in_sg = &in_sg_dup; + ret = vu_queue_pop(vdev, vq, &elem[1]); + if (ret == 0) { + if (elem[1].in_num < 1 || elem[1].in_sg[0].iov_len < l2len) { + vu_queue_rewind(vq, 1); + } else { + memcpy(elem[1].in_sg[0].iov_base, vh, l2len); + nb_ack++; + + tcp_vu_pcap(c, tapside, &elem[1].in_sg[0], 1, + l4len); + + vu_queue_fill(vq, &elem[1], l2len, 1); + } + } + } + + vu_queue_flush(vq, nb_ack);

By the way, the comment to vu_queue_flush() is also a bit misleading, it says "Number of entry to flush", which makes it look like an index, while it should say "Number of entries to flush".

+ vu_queue_notify(vdev, vq); + + return 0; +} + +/** tcp_vu_sock_recv() - Receive datastream from socket into vhost-user buffers + * @c: Execution context + * @conn: Connection pointer + * @v4: Set for IPv4 connections + * @fillsize: Number of bytes we can receive + * @datalen: Size of received data (output) + * + * Return: Number of iov entries used to store the data, negative on failure + */ +static ssize_t tcp_vu_sock_recv(struct ctx *c, + struct tcp_tap_conn *conn, bool v4, + size_t fillsize, ssize_t *data_len) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; + struct msghdr mh_sock = { 0 }; + uint16_t mss = MSS_GET(conn); + static int in_sg_count; + int s = conn->sock; + size_t l2_hdrlen; + int segment_size; + int iov_cnt; + ssize_t ret; + + l2_hdrlen = tcp_vu_l2_hdrlen(!v4); + + iov_cnt = 0; + in_sg_count = 0; + segment_size = 0; + *data_len = 0; + while (fillsize > 0 && iov_cnt < VIRTQUEUE_MAX_SIZE - 1 &&

I couldn't figure out why this needs to be less than VIRTQUEUE_MAX_SIZE - 1: do we need to leave one free slot in the queue?

+ in_sg_count < ARRAY_SIZE(in_sg)) {

As you're assuming elem[iov_cnt].in_num == 1, this will always stop at in_sg_count < ARRAY_SIZE(in_sg) - 1. I'm not sure if it's intended.

+ elem[iov_cnt].out_num = 0; + elem[iov_cnt].out_sg = NULL; + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); + if (ret < 0) + break; + + if (elem[iov_cnt].in_num < 1) { + warn("virtio-net receive queue contains no in buffers"); + break; + } + + in_sg_count += elem[iov_cnt].in_num; + + ASSERT(elem[iov_cnt].in_num == 1); + ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen);

Both would terminate passt on an issue from the hypervisor from which we could probably recover. I guess those should be err() and break.
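(An aside for anyone else reading along, since the elem[]/in_sg[]/iov_vu[] indirection above is easy to trip over — this is how the three arrays relate, assuming in_num == 1 per element as the ASSERT above does:

	/* elem[i]       one descriptor chain popped from the queue
	 * elem[i].in_sg points into in_sg[]: one struct iovec per
	 *               guest-writable buffer of that chain
	 * iov_vu[i + 1] the window of that buffer handed to recvmsg(),
	 *               past the header room on the first buffer of
	 *               each segment; iov_vu[0] is the discard buffer
	 *               used to skip bytes already sent to the guest
	 */

)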
+ + if (segment_size == 0) { + iov_vu[iov_cnt + 1].iov_base = + (char *)elem[iov_cnt].in_sg[0].iov_base + l2_hdrlen; + iov_vu[iov_cnt + 1].iov_len = + elem[iov_cnt].in_sg[0].iov_len - l2_hdrlen; + } else { + iov_vu[iov_cnt + 1].iov_base = elem[iov_cnt].in_sg[0].iov_base; + iov_vu[iov_cnt + 1].iov_len = elem[iov_cnt].in_sg[0].iov_len; + } + + if (iov_vu[iov_cnt + 1].iov_len > fillsize) + iov_vu[iov_cnt + 1].iov_len = fillsize; + + segment_size += iov_vu[iov_cnt + 1].iov_len; + if (!vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { + segment_size = 0; + } else if (segment_size >= mss) { + iov_vu[iov_cnt + 1].iov_len -= segment_size - mss; + segment_size = 0; + } + fillsize -= iov_vu[iov_cnt + 1].iov_len; + + iov_cnt++; + } + if (iov_cnt == 0) + return 0; + + mh_sock.msg_iov = iov_vu; + mh_sock.msg_iovlen = iov_cnt + 1;

I guess this should also change along with the check on peek_offset_cap (David's comment).

+ + do + ret = recvmsg(s, &mh_sock, MSG_PEEK); + while (ret < 0 && errno == EINTR); + + if (ret < 0) { + vu_queue_rewind(vq, iov_cnt); + if (errno != EAGAIN && errno != EWOULDBLOCK) { + ret = -errno; + tcp_rst(c, conn); + } + return ret; + } + if (!ret) { + vu_queue_rewind(vq, iov_cnt); + + if ((conn->events & (SOCK_FIN_RCVD | TAP_FIN_SENT)) == SOCK_FIN_RCVD) { + int retf = tcp_vu_send_flag(c, conn, FIN | ACK); + if (retf) { + tcp_rst(c, conn); + return retf; + } + + conn_event(c, conn, TAP_FIN_SENT); + } + return 0; + } + + *data_len = ret; + return iov_cnt;

On end-of-file, we return 0, as expected: no entries were used. But otherwise, if recvmsg() returns a value that's less than iov_cnt, you still return iov_cnt: is that intended? If yes, it doesn't fit with the comment to this function.

+} + +/** + * tcp_vu_prepare() - Prepare the packet header + * @c: Execution context + * @conn: Connection pointer + * @first: Pointer to the array of IO vectors + * @data_len: Packet data length

...this is the payload length, I suppose? This is called segment_size in the caller, so I guess it's that. But then we should call it 'dlen', see commit 5566386f5f11 ("treewide: Standardise variable names for various packet lengths"). By the way, we should probably copy the table from that commit message (with s/plen/dlen/) somewhere in the code, at some point.

+ * @check: Checksum, if already known + * + * Return: Level-4 length

Layer.
I would call this "IPv4 payload length" for clarity.+ */ +static size_t tcp_vu_prepare(const struct ctx *c, + struct tcp_tap_conn *conn, struct iovec *first, + size_t data_len, const uint16_t **check) +{ + const struct flowside *toside = TAPFLOW(conn); + struct iovec l2_iov[TCP_NUM_IOVS]; + char *base = first->iov_base; + struct ethhdr *eh; + size_t l4len; + + /* we guess the first iovec provided by the guest can embed + * all the headers needed by L2 frame + */ + + l2_iov[TCP_IOV_TAP].iov_base = NULL; + l2_iov[TCP_IOV_TAP].iov_len = 0; + l2_iov[TCP_IOV_ETH].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + l2_iov[TCP_IOV_ETH].iov_len = sizeof(struct ethhdr); + + eh = l2_iov[TCP_IOV_ETH].iov_base; + + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); + + /* initialize header */ + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { + struct tcp_payload_t *payload; + struct iphdr *iph; + + ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) + + sizeof(struct ethhdr) + sizeof(struct iphdr) + + sizeof(struct tcphdr)); + + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + + l2_iov[TCP_IOV_ETH].iov_len; + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct iphdr); + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + + l2_iov[TCP_IOV_IP].iov_len; + + + eh->h_proto = htons(ETH_P_IP); + + iph = l2_iov[TCP_IOV_IP].iov_base; + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_TCP); + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_payload_t, data) / 4, + .ack = 1 + }; + + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, *check, + conn->seq_to_tap, true); + /* keep the following assignment for clarity */ + /* cppcheck-suppress unreadVariable */ + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; + + *check = &iph->check; + } else { + struct tcp_payload_t *payload; + struct ipv6hdr *ip6h; + + ASSERT(first[0].iov_len >= sizeof(struct virtio_net_hdr_mrg_rxbuf) + + sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + + sizeof(struct tcphdr)); + + l2_iov[TCP_IOV_IP].iov_base = (char *)l2_iov[TCP_IOV_ETH].iov_base + + l2_iov[TCP_IOV_ETH].iov_len; + l2_iov[TCP_IOV_IP].iov_len = sizeof(struct ipv6hdr); + l2_iov[TCP_IOV_PAYLOAD].iov_base = (char *)l2_iov[TCP_IOV_IP].iov_base + + l2_iov[TCP_IOV_IP].iov_len; + + + eh->h_proto = htons(ETH_P_IPV6); + + ip6h = l2_iov[TCP_IOV_IP].iov_base; + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_TCP); + + payload = l2_iov[TCP_IOV_PAYLOAD].iov_base; + payload->th = (struct tcphdr){ + .doff = offsetof(struct tcp_payload_t, data) / 4, + .ack = 1 + }; +; + l4len = tcp_l2_buf_fill_headers(conn, l2_iov, data_len, NULL, + conn->seq_to_tap, true); + /* keep the following assignment for clarity */ + /* cppcheck-suppress unreadVariable */ + l2_iov[TCP_IOV_PAYLOAD].iov_len = l4len; + } + + return l4len; +} + +/** + * tcp_vu_data_from_sock() - Handle new data from socket, queue to vhost-user, + * in window + * @c: Execution context + * @conn: Connection pointer + * + * Return: Negative on connection reset, 0 otherwise + */ +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn)gcc isn't happy with this one either, I don't see where you modify pointed data? 
tcp.c: In function ‘tcp_data_from_sock’: tcp.c:1645:46: warning: passing argument 1 of ‘tcp_vu_data_from_sock’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers] 1645 | return tcp_vu_data_from_sock(c, conn); | ^ tcp_vu.h:10:39: note: expected ‘struct ctx *’ but argument is of type ‘const struct ctx *’ 10 | int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn); | ~~~~~~~~~~~~^

+{ + uint32_t wnd_scaled = conn->wnd_from_tap << conn->ws_from_tap; + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *tapside = TAPFLOW(conn); + uint16_t mss = MSS_GET(conn); + size_t l2_hdrlen, fillsize; + int i, iov_cnt, iov_used; + int v4 = CONN_V4(conn); + uint32_t already_sent = 0; + const uint16_t *check; + struct iovec *first; + int segment_size; + int num_buffers; + ssize_t len; + + if (!vu_queue_enabled(vq) || !vu_queue_started(vq)) { + flow_err(conn, + "Got packet, but RX virtqueue not usable yet"); + return 0; + } + + already_sent = conn->seq_to_tap - conn->seq_ack_from_tap; + + if (SEQ_LT(already_sent, 0)) { + /* RFC 761, section 2.1. */ + flow_trace(conn, "ACK sequence gap: ACK for %u, sent: %u", + conn->seq_ack_from_tap, conn->seq_to_tap); + conn->seq_to_tap = conn->seq_ack_from_tap; + already_sent = 0; + } + + if (!wnd_scaled || already_sent >= wnd_scaled) { + conn_flag(c, conn, STALLED); + conn_flag(c, conn, ACK_FROM_TAP_DUE); + return 0; + } + + /* Set up buffer descriptors we'll fill completely and partially. */ + + fillsize = wnd_scaled; + + if (peek_offset_cap) + already_sent = 0; + + iov_vu[0].iov_base = tcp_buf_discard; + iov_vu[0].iov_len = already_sent; + fillsize -= already_sent; + + /* collect the buffers from vhost-user and fill them with the + * data from the socket + */ + iov_cnt = tcp_vu_sock_recv(c, conn, v4, fillsize, &len); + if (iov_cnt <= 0) + return iov_cnt; + + len -= already_sent; + if (len <= 0) { + conn_flag(c, conn, STALLED); + vu_queue_rewind(vq, iov_cnt); + return 0; + } + + conn_flag(c, conn, ~STALLED); + + /* Likely, some new data was acked too. */ + tcp_update_seqack_wnd(c, conn, 0, NULL); + + /* initialize headers */ + l2_hdrlen = tcp_vu_l2_hdrlen(!v4); + iov_used = 0; + num_buffers = 0; + check = NULL; + segment_size = 0; + + /* iov_vu is an array of buffers and the buffer size can be + * smaller than the segment size we want to use but with + * num_buffer we can merge several virtio iov buffers in one packet + * we need only to set the packet headers in the first iov and + * num_buffer to the number of iov entries

Wait, what? :) s/packet/packet./ and s/we/We/ should make this more readable. What do you mean by "with num_buffer"? Does that refer to VIRTIO_NET_F_MRG_RXBUF?

+ */
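(To make the comment above concrete, assuming "num_buffer" indeed refers to VIRTIO_NET_F_MRG_RXBUF: with merged receive buffers, one L2 frame may span several guest buffers; only the first buffer carries the vnet/Ethernet/IP/TCP headers, and the vnet header's num_buffers field tells the guest how many buffers to stitch back together into that frame. For instance:

	/* one segment spread over three guest buffers:
	 *
	 *   iov_vu[1]: [vnet hdr][eth][ip][tcp][payload...]
	 *   iov_vu[2]: [payload...]
	 *   iov_vu[3]: [payload...]
	 *
	 * vh->num_buffers = htole16(3), set in the first buffer only
	 */

The loop below closes a frame when it reaches the MSS, when data or buffers run out, or after every buffer if merging isn't negotiated.)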
+ for (i = 0; i < iov_cnt && len; i++) { + + if (segment_size == 0) + first = &iov_vu[i + 1]; + + if (iov_vu[i + 1].iov_len > (size_t)len) + iov_vu[i + 1].iov_len = len; + + len -= iov_vu[i + 1].iov_len; + iov_used++; + + segment_size += iov_vu[i + 1].iov_len; + num_buffers++; + + if (segment_size >= mss || len == 0 ||

Shouldn't we stop just _before_ exceeding the MSS? Here it looks like we decide to prepare a frame after we did (plus some other conditions), instead of having a look at the next item to see if it can also fit.

+ i + 1 == iov_cnt || !vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { + struct virtio_net_hdr_mrg_rxbuf *vh; + size_t l4len; + + if (i + 1 == iov_cnt) + check = NULL; + + /* restore first iovec base: point to vnet header */ + first->iov_base = (char *)first->iov_base - l2_hdrlen; + first->iov_len = first->iov_len + l2_hdrlen; + + vh = first->iov_base; + + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(num_buffers); + + l4len = tcp_vu_prepare(c, conn, first, segment_size, &check); + + tcp_vu_pcap(c, tapside, first, num_buffers, l4len); + + conn->seq_to_tap += segment_size; + + segment_size = 0; + num_buffers = 0; + } + } + + /* release unused buffers */ + vu_queue_rewind(vq, iov_cnt - iov_used); + + /* send packets */ + vu_send_frame(vdev, vq, elem, &iov_vu[1], iov_used); + + conn_flag(c, conn, ACK_FROM_TAP_DUE); + + return 0; +} diff --git a/tcp_vu.h b/tcp_vu.h new file mode 100644 index 000000000000..b433c3e0d06f --- /dev/null +++ b/tcp_vu.h @@ -0,0 +1,12 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef TCP_VU_H +#define TCP_VU_H + +int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags); +int tcp_vu_data_from_sock(struct ctx *c, struct tcp_tap_conn *conn); + +#endif /*TCP_VU_H */ diff --git a/udp.c b/udp.c index 2ba00c9c20a8..f7b5b5eb6421 100644 --- a/udp.c +++ b/udp.c @@ -109,8 +109,7 @@ #include "pcap.h" #include "log.h" #include "flow_table.h" - -#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ +#include "udp_internal.h" /* "Spliced" sockets indexed by bound port (host order) */ static int udp_splice_ns [IP_VERSIONS][NUM_PORTS]; @@ -118,20 +117,8 @@ static int udp_splice_init[IP_VERSIONS][NUM_PORTS]; /* Static buffers */ -/** - * struct udp_payload_t - UDP header and data for inbound messages - * @uh: UDP header - * @data: UDP data - */ -static struct udp_payload_t { - struct udphdr uh; - char data[USHRT_MAX - sizeof(struct udphdr)]; -#ifdef __AVX2__ -} __attribute__ ((packed, aligned(32))) -#else -} __attribute__ ((packed, aligned(__alignof__(unsigned int)))) -#endif -udp_payload[UDP_MAX_FRAMES]; +/* UDP header and data for inbound messages */ +static struct udp_payload_t udp_payload[UDP_MAX_FRAMES]; /* Ethernet header for IPv4 frames */ static struct ethhdr udp4_eth_hdr; @@ -298,11 +285,13 @@ static void udp_splice_send(const struct ctx *c, size_t start, size_t n, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum * * Return: size of IPv4 payload (UDP header + data) */ -static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, - const struct flowside *toside, size_t dlen) +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum) { const struct in_addr *src = inany_v4(&toside->oaddr); const struct in_addr *dst = inany_v4(&toside->eaddr); @@ -319,7 +308,10 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, bp->uh.source = htons(toside->oport); bp->uh.dest = htons(toside->eport); bp->uh.len = htons(l4len); - csum_udp4(&bp->uh, *src, *dst, bp->data, dlen); + if (no_udp_csum) + bp->uh.check = 0; + else + csum_udp4(&bp->uh, *src,
*dst, bp->data, dlen); return l4len; } @@ -330,11 +322,13 @@ static size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, * @bp: Pointer to udp_payload_t to update * @toside: Flowside for destination side * @dlen: Length of UDP payload + * @no_udp_csum: Do not set UDP checksum * * Return: size of IPv6 payload (UDP header + data) */ -static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, - const struct flowside *toside, size_t dlen) +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum) { uint16_t l4len = dlen + sizeof(bp->uh); @@ -348,7 +342,16 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, bp->uh.source = htons(toside->oport); bp->uh.dest = htons(toside->eport); bp->uh.len = ip6h->payload_len; - csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, bp->data, dlen); + if (no_udp_csum) { + /* 0 is an invalid checksum for UDP IPv6 and dropped by + * the kernel stack, even if the checksum is disabled by virtio + * flags. We need to put any non-zero value here. + */ + bp->uh.check = 0xffff; + } else { + csum_udp6(&bp->uh, &toside->oaddr.a6, &toside->eaddr.a6, + bp->data, dlen); + } return l4len; } @@ -358,9 +361,11 @@ static size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, * @mmh: Receiving mmsghdr array * @idx: Index of the datagram to prepare * @toside: Flowside for destination side + * @no_udp_csum: Do not set UDP checksum */ -static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, - const struct flowside *toside) +static void udp_tap_prepare(const struct mmsghdr *mmh, + unsigned idx, const struct flowside *toside, + bool no_udp_csum) { struct iovec (*tap_iov)[UDP_NUM_IOVS] = &udp_l2_iov[idx]; struct udp_payload_t *bp = &udp_payload[idx]; @@ -368,13 +373,15 @@ static void udp_tap_prepare(const struct mmsghdr *mmh, unsigned idx, size_t l4len; if (!inany_v4(&toside->eaddr) || !inany_v4(&toside->oaddr)) { - l4len = udp_update_hdr6(&bm->ip6h, bp, toside, mmh[idx].msg_len); + l4len = udp_update_hdr6(&bm->ip6h, bp, toside, + mmh[idx].msg_len, no_udp_csum); tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip6h) + sizeof(udp6_eth_hdr)); (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp6_eth_hdr); (*tap_iov)[UDP_IOV_IP] = IOV_OF_LVALUE(bm->ip6h); } else { - l4len = udp_update_hdr4(&bm->ip4h, bp, toside, mmh[idx].msg_len); + l4len = udp_update_hdr4(&bm->ip4h, bp, toside, + mmh[idx].msg_len, no_udp_csum); tap_hdr_update(&bm->taph, l4len + sizeof(bm->ip4h) + sizeof(udp4_eth_hdr)); (*tap_iov)[UDP_IOV_ETH] = IOV_OF_LVALUE(udp4_eth_hdr); @@ -447,7 +454,7 @@ static int udp_sock_recverr(int s) * * Return: Number of errors handled, or < 0 if we have an unrecoverable error */ -static int udp_sock_errs(const struct ctx *c, int s, uint32_t events) +int udp_sock_errs(const struct ctx *c, int s, uint32_t events) { unsigned n_err = 0; socklen_t errlen; @@ -524,7 +531,7 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, } /** - * udp_listen_sock_handler() - Handle new data from socket + * udp_buf_listen_sock_handler() - Handle new data from socket * @c: Execution context * @ref: epoll reference * @events: epoll events bitmap @@ -532,8 +539,8 @@ static int udp_sock_recv(const struct ctx *c, int s, uint32_t events, * * #syscalls recvmmsg */ -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now) +void udp_buf_listen_sock_handler(const struct ctx *c,
union epoll_ref ref, + uint32_t events, const struct timespec *now) { const socklen_t sasize = sizeof(udp_meta[0].s_in); int n, i; @@ -565,7 +572,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, udp_splice_prepare(udp_mh_recv, i); } else if (batchpif == PIF_TAP) { udp_tap_prepare(udp_mh_recv, i, - flowside_at_sidx(batchsidx)); + flowside_at_sidx(batchsidx), + false); } if (++i >= n) @@ -599,7 +607,7 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, } /** - * udp_reply_sock_handler() - Handle new data from flow specific socket + * udp_buf_reply_sock_handler() - Handle new data from flow specific socket * @c: Execution context * @ref: epoll reference * @events: epoll events bitmap @@ -607,8 +615,8 @@ void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, * * #syscalls recvmmsg */ -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now) +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) { flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); const struct flowside *toside = flowside_at_sidx(tosidx); @@ -636,7 +644,7 @@ void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, if (pif_is_socket(topif)) udp_splice_prepare(udp_mh_recv, i); else if (topif == PIF_TAP) - udp_tap_prepare(udp_mh_recv, i, toside); + udp_tap_prepare(udp_mh_recv, i, toside, false); /* Restore sockaddr length clobbered by recvmsg() */ udp_mh_recv[i].msg_hdr.msg_namelen = sizeof(udp_meta[i].s_in); } diff --git a/udp.h b/udp.h index a8e76bfe8f37..ea23fb36b637 100644 --- a/udp.h +++ b/udp.h @@ -9,10 +9,10 @@ #define UDP_TIMER_INTERVAL 1000 /* ms */ void udp_portmap_clear(void); -void udp_listen_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now); -void udp_reply_sock_handler(const struct ctx *c, union epoll_ref ref, - uint32_t events, const struct timespec *now); +void udp_buf_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +void udp_buf_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); int udp_tap_handler(const struct ctx *c, uint8_t pif, sa_family_t af, const void *saddr, const void *daddr, const struct pool *p, int idx, const struct timespec *now); diff --git a/udp_internal.h b/udp_internal.h new file mode 100644 index 000000000000..cc80e3055423 --- /dev/null +++ b/udp_internal.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later + * Copyright (c) 2021 Red Hat GmbH + * Author: Stefano Brivio <sbrivio(a)redhat.com> + */ + +#ifndef UDP_INTERNAL_H +#define UDP_INTERNAL_H + +#include "tap.h" /* needed by udp_meta_t */ + +#define UDP_MAX_FRAMES 32 /* max # of frames to receive at once */ + +/** + * struct udp_payload_t - UDP header and data for inbound messages + * @uh: UDP header + * @data: UDP data + */ +struct udp_payload_t { + struct udphdr uh; + char data[USHRT_MAX - sizeof(struct udphdr)]; +#ifdef __AVX2__ +} __attribute__ ((packed, aligned(32))); +#else +} __attribute__ ((packed, aligned(__alignof__(unsigned int)))); +#endif + +size_t udp_update_hdr4(struct iphdr *ip4h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum); +size_t udp_update_hdr6(struct ipv6hdr *ip6h, struct udp_payload_t *bp, + const struct flowside *toside, size_t dlen, + bool no_udp_csum); +int udp_sock_errs(const struct ctx 
*c, int s, uint32_t events); +#endif /* UDP_INTERNAL_H */ diff --git a/udp_vu.c b/udp_vu.c new file mode 100644 index 000000000000..fa390dec994a --- /dev/null +++ b/udp_vu.c @@ -0,0 +1,397 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* udp_vu.c - UDP L2 vhost-user management functions + * + * Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#include <unistd.h> +#include <assert.h> +#include <net/ethernet.h> +#include <net/if.h> +#include <netinet/in.h> +#include <netinet/ip.h> +#include <netinet/udp.h> +#include <stdint.h> +#include <stddef.h> +#include <sys/uio.h> +#include <linux/virtio_net.h> + +#include "checksum.h" +#include "util.h" +#include "ip.h" +#include "siphash.h" +#include "inany.h" +#include "passt.h" +#include "pcap.h" +#include "log.h" +#include "vhost_user.h" +#include "udp_internal.h" +#include "flow.h" +#include "flow_table.h" +#include "udp_flow.h" +#include "udp_vu.h" +#include "vu_common.h" + +static struct iovec iov_vu [VIRTQUEUE_MAX_SIZE]; +static struct vu_virtq_element elem [VIRTQUEUE_MAX_SIZE];

Why these spaces and tabs if things are not aligned anyway? It makes it a bit difficult to read.

+static struct iovec in_sg[VIRTQUEUE_MAX_SIZE]; +static int in_sg_count; + +/** + * udp_vu_l2_hdrlen() - return the size of the header in level 2 frame (UDP)

layer. But it's actually the sum of all headers, up to Layer-4?

+ * @v6: Set for IPv6 packet + * + * Return: Return the size of the header + */ +static size_t udp_vu_l2_hdrlen(bool v6) +{ + size_t l2_hdrlen; + + l2_hdrlen = sizeof(struct virtio_net_hdr_mrg_rxbuf) + sizeof(struct ethhdr) + + sizeof(struct udphdr); + + if (v6) + l2_hdrlen += sizeof(struct ipv6hdr); + else + l2_hdrlen += sizeof(struct iphdr); + + return l2_hdrlen; +} + +static int udp_vu_sock_init(int s, union sockaddr_inany *s_in) +{ + struct msghdr msg = { + .msg_name = s_in, + .msg_namelen = sizeof(union sockaddr_inany), + }; + + return recvmsg(s, &msg, MSG_PEEK | MSG_DONTWAIT); +} + +/** + * udp_vu_sock_recv() - Receive datagrams from socket into vhost-user buffers + * @c: Execution context + * @s: Socket to receive from + * @events: epoll events bitmap + * @v6: Set for IPv6 connections + * @datalen: Size of received data (output) + * + * Return: Number of iov entries used to store the datagram + */ +static int udp_vu_sock_recv(const struct ctx *c, int s, uint32_t events, + bool v6, ssize_t *data_len) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + int virtqueue_max, iov_cnt, idx, iov_used; + size_t fillsize, size, off, l2_hdrlen; + struct virtio_net_hdr_mrg_rxbuf *vh; + struct msghdr msg = { 0 }; + char *base; + + ASSERT(!c->no_udp); + + if (!(events & EPOLLIN)) + return 0; + + /* compute L2 header length */

...this is not related to virtqueue_max and VIRTIO_NET_F_MRG_RXBUF, right?

+ + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + virtqueue_max = VIRTQUEUE_MAX_SIZE; + else + virtqueue_max = 1; + + l2_hdrlen = udp_vu_l2_hdrlen(v6); + + fillsize = USHRT_MAX; + iov_cnt = 0; + in_sg_count = 0; + while (fillsize && iov_cnt < virtqueue_max && + in_sg_count < ARRAY_SIZE(in_sg)) { + int ret; + + elem[iov_cnt].out_num = 0; + elem[iov_cnt].out_sg = NULL; + elem[iov_cnt].in_num = ARRAY_SIZE(in_sg) - in_sg_count; + elem[iov_cnt].in_sg = &in_sg[in_sg_count]; + ret = vu_queue_pop(vdev, vq, &elem[iov_cnt]); + if (ret < 0) + break; + in_sg_count += elem[iov_cnt].in_num; + + if (elem[iov_cnt].in_num < 1) { + err("virtio-net receive queue contains no in buffers"); + vu_queue_rewind(vq, iov_cnt); + return 0; + }
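(Related note, since the misplaced "compute L2 header length" comment above makes the virtqueue_max logic easy to misread: without VIRTIO_NET_F_MRG_RXBUF the device may use a single buffer per frame, so the whole datagram, headers included, has to fit in one descriptor chain and virtqueue_max is 1; with the feature, up to VIRTQUEUE_MAX_SIZE chains can be collected, and the vnet header's num_buffers, set at the end of this function, records how many were actually used. As an invariant sketch, not code from the series:

	/* !MRG_RXBUF: iov_used == 1, frame must fit in in_sg[0]
	 *  MRG_RXBUF: 1 <= iov_used <= VIRTQUEUE_MAX_SIZE,
	 *             vh->num_buffers = htole16(iov_used)
	 */

)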
+ ASSERT(elem[iov_cnt].in_num == 1); + ASSERT(elem[iov_cnt].in_sg[0].iov_len >= l2_hdrlen); + + if (iov_cnt == 0) { + base = elem[iov_cnt].in_sg[0].iov_base; + size = elem[iov_cnt].in_sg[0].iov_len; + + /* keep space for the headers */ + iov_vu[0].iov_base = base + l2_hdrlen; + iov_vu[0].iov_len = size - l2_hdrlen; + } else { + iov_vu[iov_cnt].iov_base = elem[iov_cnt].in_sg[0].iov_base; + iov_vu[iov_cnt].iov_len = elem[iov_cnt].in_sg[0].iov_len; + } + + if (iov_vu[iov_cnt].iov_len > fillsize) + iov_vu[iov_cnt].iov_len = fillsize; + + fillsize -= iov_vu[iov_cnt].iov_len; + + iov_cnt++; + } + if (iov_cnt == 0) + return 0; + + msg.msg_iov = iov_vu; + msg.msg_iovlen = iov_cnt; + + *data_len = recvmsg(s, &msg, 0); + if (*data_len < 0) { + vu_queue_rewind(vq, iov_cnt); + return 0; + } + + /* restore original values */ + iov_vu[0].iov_base = base; + iov_vu[0].iov_len = size; + + /* count the numbers of buffer filled by recvmsg() */ + idx = iov_skip_bytes(iov_vu, iov_cnt, l2_hdrlen + *data_len, + &off); + /* adjust last iov length */ + if (idx < iov_cnt) + iov_vu[idx].iov_len = off; + iov_used = idx + !!off; + + /* release unused buffers */ + vu_queue_rewind(vq, iov_cnt - iov_used); + + vh = (struct virtio_net_hdr_mrg_rxbuf *)base; + vh->hdr = VU_HEADER; + if (vu_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + vh->num_buffers = htole16(iov_used); + + return iov_used; +} + +/** + * udp_vu_prepare() - Prepare the packet header + * @c: Execution context + * @toside: Address information for one side of the flow + * @datalen: Packet data length + * + * Return: Level-4 length

Same as above.

+ */ +static size_t udp_vu_prepare(const struct ctx *c, + const struct flowside *toside, ssize_t data_len) +{ + struct ethhdr *eh; + size_t l4len; + + /* ethernet header */ + eh = vu_eth(iov_vu[0].iov_base); + + memcpy(eh->h_dest, c->guest_mac, sizeof(eh->h_dest)); + memcpy(eh->h_source, c->our_tap_mac, sizeof(eh->h_source)); + + /* initialize header */ + if (inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)) { + struct iphdr *iph = vu_ip(iov_vu[0].iov_base); + struct udp_payload_t *bp = vu_payloadv4(iov_vu[0].iov_base); + + eh->h_proto = htons(ETH_P_IP); + + *iph = (struct iphdr)L2_BUF_IP4_INIT(IPPROTO_UDP); + + l4len = udp_update_hdr4(iph, bp, toside, data_len, true); + } else { + struct ipv6hdr *ip6h = vu_ip(iov_vu[0].iov_base); + struct udp_payload_t *bp = vu_payloadv6(iov_vu[0].iov_base); + + eh->h_proto = htons(ETH_P_IPV6); + + *ip6h = (struct ipv6hdr)L2_BUF_IP6_INIT(IPPROTO_UDP); + + l4len = udp_update_hdr6(ip6h, bp, toside, data_len, true); + } + + return l4len; +} + +/** + * udp_vu_pcap() - Capture a single frame to pcap file (UDP) + * @c: Execution context + * @toside: ddress information for one side of the flow

address

+ * @l4len: IPv4 Payload length + * @iov_used: Length of the array + */ +static void udp_vu_pcap(const struct ctx *c, const struct flowside *toside, + size_t l4len, int iov_used) +{ + const struct in_addr *src4 = inany_v4(&toside->oaddr); + const struct in_addr *dst4 = inany_v4(&toside->eaddr); + char *base = iov_vu[0].iov_base; + size_t size = iov_vu[0].iov_len; + struct udp_payload_t *bp; + uint32_t sum; + + if (!*c->pcap) + return; + + if (src4 && dst4) { + bp = vu_payloadv4(base); + sum = proto_ipv4_header_psum(l4len, IPPROTO_UDP, *src4, *dst4); + } else { + bp = vu_payloadv6(base); + sum = proto_ipv6_header_psum(l4len, IPPROTO_UDP, + &toside->oaddr.a6, + &toside->eaddr.a6); + bp->uh.check = 0; /* by default, set to 0xffff */ + } + + iov_vu[0].iov_base = &bp->uh; +
iov_vu[0].iov_len = size - ((char *)iov_vu[0].iov_base - base); + + bp->uh.check = csum_iov(iov_vu, iov_used, sum); + + /* set iov for pcap logging */ + iov_vu[0].iov_base = base + sizeof(struct virtio_net_hdr_mrg_rxbuf); + iov_vu[0].iov_len = size - sizeof(struct virtio_net_hdr_mrg_rxbuf); + pcap_iov(iov_vu, iov_used); + + /* restore iov_vu[0] */ + iov_vu[0].iov_base = base; + iov_vu[0].iov_len = size; +} + +/** + * udp_vu_listen_sock_handler() - Handle new data from socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + const struct flowside *toside; + union sockaddr_inany s_in; + flow_sidx_t batchsidx; + uint8_t batchpif; + bool v6; + int i; + + if (udp_sock_errs(c, ref.fd, events) < 0) { + err("UDP: Unrecoverable error on listening socket:" + " (%s port %hu)", pif_name(ref.udp.pif), ref.udp.port); + return; + } + + if (udp_vu_sock_init(ref.fd, &s_in) < 0) + return; + + batchsidx = udp_flow_from_sock(c, ref, &s_in, now); + batchpif = pif_at_sidx(batchsidx); + + if (batchpif != PIF_TAP) { + if (flow_sidx_valid(batchsidx)) { + flow_sidx_t fromsidx = flow_sidx_opposite(batchsidx); + struct udp_flow *uflow = udp_at_sidx(batchsidx); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(pif_at_sidx(fromsidx)), + pif_name(batchpif)); + } else { + debug("Discarding 1 datagram without flow"); + } + + return; + } + + toside = flowside_at_sidx(batchsidx); + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, ref.fd, events, v6, &data_len); + if (iov_used <= 0) + return; + + l4len = udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} + +/** + * udp_vu_reply_sock_handler() - Handle new data from flow specific socket + * @c: Execution context + * @ref: epoll reference + * @events: epoll events bitmap + * @now: Current timestamp + */ +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now) +{ + flow_sidx_t tosidx = flow_sidx_opposite(ref.flowside); + const struct flowside *toside = flowside_at_sidx(tosidx); + struct vu_dev *vdev = c->vdev; + struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; + struct udp_flow *uflow = udp_at_sidx(ref.flowside); + int from_s = uflow->s[ref.flowside.sidei]; + uint8_t topif = pif_at_sidx(tosidx); + bool v6; + int i; + + ASSERT(!c->no_udp); + ASSERT(uflow); + + if (udp_sock_errs(c, from_s, events) < 0) { + flow_err(uflow, "Unrecoverable error on reply socket"); + flow_err_details(uflow); + udp_flow_close(c, uflow); + return; + } + + if (topif != PIF_TAP) { + uint8_t frompif = pif_at_sidx(ref.flowside); + + flow_err(uflow, + "No support for forwarding UDP from %s to %s", + pif_name(frompif), pif_name(topif)); + return; + } + + v6 = !(inany_v4(&toside->eaddr) && inany_v4(&toside->oaddr)); + + for (i = 0; i < UDP_MAX_FRAMES; i++) { + ssize_t data_len; + size_t l4len; + int iov_used; + + iov_used = udp_vu_sock_recv(c, from_s, events, v6, &data_len); + if (iov_used <= 0) + return; + flow_trace(uflow, "Received 1 datagram on reply socket"); + uflow->ts = now->tv_sec; + + l4len = 
udp_vu_prepare(c, toside, data_len); + udp_vu_pcap(c, toside, l4len, iov_used); + vu_send_frame(vdev, vq, elem, iov_vu, iov_used); + } +} diff --git a/udp_vu.h b/udp_vu.h new file mode 100644 index 000000000000..ba7018d3bf01 --- /dev/null +++ b/udp_vu.h @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* Copyright Red Hat + * Author: Laurent Vivier <lvivier(a)redhat.com> + */ + +#ifndef UDP_VU_H +#define UDP_VU_H + +void udp_vu_listen_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +void udp_vu_reply_sock_handler(const struct ctx *c, union epoll_ref ref, + uint32_t events, const struct timespec *now); +#endif /* UDP_VU_H */ diff --git a/vhost_user.c b/vhost_user.c index 3b38e06f268e..0f98ee7fa7c3 100644 --- a/vhost_user.c +++ b/vhost_user.c @@ -52,7 +52,6 @@ * this is part of the vhost-user backend * convention. */ -/* cppcheck-suppress unusedFunction */ void vu_print_capabilities(void) { info("{"); @@ -162,9 +161,7 @@ static void vmsg_close_fds(const struct vhost_user_msg *vmsg) */ static void vu_remove_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_DEL, fd, NULL); } /** @@ -425,7 +422,6 @@ static bool map_ring(struct vu_dev *vdev, struct vu_virtq *vq) * * Return: 0 if the zone is in a mapped memory region, -1 otherwise */ -/* cppcheck-suppress unusedFunction */ int vu_packet_check_range(void *buf, size_t offset, size_t len, const char *start) { @@ -515,6 +511,14 @@ static bool vu_set_mem_table_exec(struct vu_dev *vdev, } } + /* As vu_packet_check_range() has no access to the number of + * memory regions, mark the end of the array with mmap_addr = 0 + */ + ASSERT(vdev->nregions < VHOST_USER_MAX_RAM_SLOTS - 1); + vdev->regions[vdev->nregions].mmap_addr = 0; + + tap_sock_update_buf(vdev->regions, 0); + return false; } @@ -643,9 +647,12 @@ static bool vu_get_vring_base_exec(struct vu_dev *vdev, */ static void vu_set_watch(const struct vu_dev *vdev, int fd) { - /* Placeholder to add passt related code */ - (void)vdev; - (void)fd; + union epoll_ref ref = { .type = EPOLL_TYPE_VHOST_KICK, .fd = fd }; + struct epoll_event ev = { 0 }; + + ev.data.u64 = ref.u64; + ev.events = EPOLLIN; + epoll_ctl(vdev->context->epollfd, EPOLL_CTL_ADD, fd, &ev); } /** @@ -685,7 +692,6 @@ static int vu_wait_queue(const struct vu_virtq *vq) * * Return: number of bytes sent, -1 if there is an error */ -/* cppcheck-suppress unusedFunction */ int vu_send(struct vu_dev *vdev, const void *buf, size_t size) { struct vu_virtq *vq = &vdev->vq[VHOST_USER_RX_QUEUE]; @@ -869,7 +875,6 @@ static void vu_handle_tx(struct vu_dev *vdev, int index, * @ref: epoll reference information * @now: Current timestamp */ -/* cppcheck-suppress unusedFunction */ void vu_kick_cb(struct vu_dev *vdev, union epoll_ref ref, const struct timespec *now) { @@ -1104,11 +1109,11 @@ static bool vu_set_vring_enable_exec(struct vu_dev *vdev, * @c: execution context * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_init(struct ctx *c, struct vu_dev *vdev) { int i; + c->vdev = vdev; vdev->context = c; for (i = 0; i < VHOST_USER_MAX_QUEUES; i++) { vdev->vq[i] = (struct vu_virtq){ @@ -1124,7 +1129,6 @@ void vu_init(struct ctx *c, struct vu_dev *vdev) * vu_cleanup() - Reset vhost-user device * @vdev: vhost-user device */ -/* cppcheck-suppress unusedFunction */ void vu_cleanup(struct vu_dev *vdev) { unsigned int i; @@ -1171,8 +1175,7 @@ void 
@@ -1171,8 +1175,7 @@ void vu_cleanup(struct vu_dev *vdev)
  */
 static void vu_sock_reset(struct vu_dev *vdev)
 {
-        /* Placeholder to add passt related code */
-        (void)vdev;
+        tap_sock_reset(vdev->context);
 }
 
 static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
@@ -1200,7 +1203,6 @@ static bool (*vu_handle[VHOST_USER_MAX])(struct vu_dev *vdev,
  * @fd:         vhost-user message socket
  * @events:     epoll events
  */
-/* cppcheck-suppress unusedFunction */
 void vu_control_handler(struct vu_dev *vdev, int fd, uint32_t events)
 {
         struct vhost_user_msg msg = { 0 };
diff --git a/virtio.c b/virtio.c
index 237395396606..31e56def2c23 100644
--- a/virtio.c
+++ b/virtio.c
@@ -562,7 +562,6 @@ void vu_queue_unpop(struct vu_virtq *vq)
  * @vq:         Virtqueue
  * @num:        Number of element to unpop
  */
-/* cppcheck-suppress unusedFunction */
 bool vu_queue_rewind(struct vu_virtq *vq, unsigned int num)
 {
         if (num > vq->inuse)
diff --git a/vu_common.c b/vu_common.c
new file mode 100644
index 000000000000..7a9caae17f42
--- /dev/null
+++ b/vu_common.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ *
+ * vu_common.c - vhost-user common UDP and TCP functions
+ */
+
+#include <unistd.h>
+#include <sys/uio.h>
+#include <linux/virtio_net.h>
+
+#include "util.h"
+#include "passt.h"
+#include "vhost_user.h"
+#include "vu_common.h"
+
+/**
+ * vu_send_frame() - Send one frame to the vhost-user interface
+ * @vdev:       vhost-user device
+ * @vq:         vhost-user virtqueue
+ * @elem:       virtqueue element array to send back to the virtqueue
+ * @iov_vu:     iovec array containing the data to send
+ * @iov_used:   Length of the array
+ */
+void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq,
+                   struct vu_virtq_element *elem, const struct iovec *iov_vu,
+                   int iov_used)
+{
+        int i;
+
+        for (i = 0; i < iov_used; i++)
+                vu_queue_fill(vq, &elem[i], iov_vu[i].iov_len, i);
+
+        vu_queue_flush(vq, iov_used);
+        vu_queue_notify(vdev, vq);
+}
diff --git a/vu_common.h b/vu_common.h
new file mode 100644
index 000000000000..20950b44493c
--- /dev/null
+++ b/vu_common.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later
+ * Copyright Red Hat
+ * Author: Laurent Vivier <lvivier@redhat.com>
+ *
+ * vhost-user common UDP and TCP functions
+ */
+
+#ifndef VU_COMMON_H
+#define VU_COMMON_H
+
+static inline void *vu_eth(void *base)
+{
+        return ((char *)base + sizeof(struct virtio_net_hdr_mrg_rxbuf));
+}
+
+static inline void *vu_ip(void *base)
+{
+        return (struct ethhdr *)vu_eth(base) + 1;
+}
+
+static inline void *vu_payloadv4(void *base)
+{
+        return (struct iphdr *)vu_ip(base) + 1;
+}
+
+static inline void *vu_payloadv6(void *base)
+{
+        return (struct ipv6hdr *)vu_ip(base) + 1;
+}
+
+void vu_send_frame(const struct vu_dev *vdev, struct vu_virtq *vq,
+                   struct vu_virtq_element *elem, const struct iovec *iov_vu,
+                   int iov_used);
+#endif /* VU_COMMON_H */
-- 
Stefano
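The vu_eth()/vu_payloadv*() helpers above encode the layout of a frame
exchanged over the virtqueue: a struct virtio_net_hdr_mrg_rxbuf, then the
Ethernet header, then the IPv4 or IPv6 header, then the L4 payload. As a
minimal sketch of that layout (illustration only, not part of the series;
the function name and the UDP cast are hypothetical), a caller holding a
frame at 'base' would locate each header as follows:

        /* Hypothetical sketch: walk the headers of an IPv4/UDP frame
         * stored at 'base', using the inline helpers from vu_common.h
         */
        static void walk_frame(void *base)
        {
                struct ethhdr *eh = vu_eth(base);       /* after virtio-net header */
                struct iphdr *iph = vu_ip(base);        /* after Ethernet header */
                struct udphdr *uh = vu_payloadv4(base); /* after IPv4 header */

                (void)eh; (void)iph; (void)uh;          /* illustration only */
        }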
On Thu, Sep 19, 2024 at 03:51:43PM +0200, Stefano Brivio wrote:
> Sorry for the delay, I wanted first to finish extending tests to run also
> functional ones (not just throughput and latency) with vhost-user, but
> it's taking me a bit longer than expected, so here comes the review.
> 
> By the way, by mistake I let passt run in non-vhost-user mode while QEMU
> was configured to use it. This results in a loop:
> 
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: Failed to read msg header. Read 0 instead of 12. Original request 1.
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: vhost_backend_init failed: Protocol error
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: failed to init vhost_net for queue 0
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: Failed to read msg header. Read 0 instead of 12. Original request 1.
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: vhost_backend_init failed: Protocol error
>   qemu-system-x86_64: -netdev vhost-user,id=netdev0,chardev=chr0: failed to init vhost_net for queue 0
>   ...
> 
> and passt says:
> 
>   accepted connection from PID 4807
>   Bad frame size from guest, resetting connection
>   Client connection closed
>   accepted connection from PID 4807
>   Bad frame size from guest, resetting connection
>   Client connection closed
>   ...
> 
> while happily flooding system logs. I guess it should be fixed in QEMU
> at some point: if the vhost_net initialisation fails, I don't see the
> point in retrying. This is without the "reconnect" option, by the way.

Fwiw, I tend to agree.

[snip]

> > +.TP
> > +.BR \-\-vhost-user
> 
> I think we should introduce this option as deprecated right away, so
> that we can switch to vhost-user mode by default soon (checking if the
> hypervisor sends us a vhost-user command) without having to keep this
> option around. At that point, we can add --no-vhost-user instead.
> 
> If it makes sense, you could copy the text from --stderr:
> 
>   Note that this configuration option is \fBdeprecated\fR and will be
>   removed in a future version.

If we introduced this as deprecated, then we should also introduce
--no-vhost-user immediately (as a no-op), so that people can make
future-proof scripts that _don't_ want VU right away.

Or... maybe we should start deprecating in terms of a more general
option, say:

        --backend qemu-stream
        --backend tuntap
        --backend vhost-user

so we don't need to proliferate even more options when, say, we want to
add:

        --backend vduse

Or whatever.

> > +Enable vhost-user. The vhost-user command socket is provided by
> > +\fB--socket\fR.
> > +
> > +.TP
> > +.BR \-\-print-capabilities
> > +Print back-end capabilities in JSON format, only meaningful for
> > +vhost-user mode.
> > +
> >  .TP
> >  .BR \-F ", " \-\-fd " " \fIFD
> >  Pass a pre-opened, connected socket to \fBpasst\fR. Usually the socket is opened
> > diff --git a/passt.c b/passt.c
> > index ad6f0bc32df6..b64efeaf346c 100644
> > --- a/passt.c
> > +++ b/passt.c
> > @@ -74,6 +74,8 @@ char *epoll_type_str[] = {
> >          [EPOLL_TYPE_TAP_PASTA]  = "/dev/net/tun device",
> >          [EPOLL_TYPE_TAP_PASST]  = "connected qemu socket",
> >          [EPOLL_TYPE_TAP_LISTEN] = "listening qemu socket",
> > +        [EPOLL_TYPE_VHOST_CMD]  = "vhost-user command socket",
> > +        [EPOLL_TYPE_VHOST_KICK] = "vhost-user kick socket",
> >  };
> >  static_assert(ARRAY_SIZE(epoll_type_str) == EPOLL_NUM_TYPES,
> >                "epoll_type_str[] doesn't match enum epoll_type");
> > @@ -206,6 +208,7 @@ int main(int argc, char **argv)
> >          struct rlimit limit;
> >          struct timespec now;
> >          struct sigaction sa;
> > +        struct vu_dev vdev;
> >  
> >          clock_gettime(CLOCK_MONOTONIC, &log_start);
> > @@ -262,6 +265,8 @@ int main(int argc, char **argv)
> >          pasta_netns_quit_init(&c);
> >  
> >          tap_sock_init(&c);
> > +        if (c.mode == MODE_VU)
> > +                vu_init(&c, &vdev);
> >  
> >          secret_init(&c);
> > @@ -352,14 +357,31 @@ loop:
> >                          tcp_timer_handler(&c, ref);
> >                          break;
> >                  case EPOLL_TYPE_UDP_LISTEN:
> > -                        udp_listen_sock_handler(&c, ref, eventmask, &now);
> > +                        if (c.mode == MODE_VU) {
> 
> Eventually, we'll probably want to make passt more generic and to
> support multiple guests, so at that point this might become
> EPOLL_TYPE_UDP_VU_LISTEN if it's a socket we opened for a guest using
> vhost-user. Or maybe we'll have to unify the receive paths, so this
> will remain EPOLL_TYPE_UDP_LISTEN.
> 
> Either way, _if it's more convenient for you right now_, I wouldn't see
> any issue in defining new EPOLL_TYPE_UDP_VU_{LISTEN,REPLY} values.

Yeah, I don't think having different epoll types would be particularly
helpful here. If we have multiple guests we won't necessarily know which
one we'll be forwarding to until we've been through the "forward / NAT"
logic.

> > +                                udp_vu_listen_sock_handler(&c, ref, eventmask,
> > +                                                           &now);
> > +                        } else {
> > +                                udp_buf_listen_sock_handler(&c, ref, eventmask,
> > +                                                            &now);
> > +                        }
> >                          break;
> >                  case EPOLL_TYPE_UDP_REPLY:

It could potentially be used for this case, though, since this is
associated with a single flow.

> > -                        udp_reply_sock_handler(&c, ref, eventmask, &now);
> > +                        if (c.mode == MODE_VU)
> > +                                udp_vu_reply_sock_handler(&c, ref, eventmask,
> > +                                                          &now);
> > +                        else
> > +                                udp_buf_reply_sock_handler(&c, ref, eventmask,
> > +                                                           &now);
> >                          break;

[snip]

> > +/**
> > + * tcp_vu_pcap() - Capture a single frame to pcap file (TCP)
> > + * @c:          Execution context
> > + * @tapside:    Address information for one side of the flow
> > + * @iov:        Pointer to the array of IO vectors
> > + * @iov_used:   Length of the array
> > + * @l4len:      IPv4 Payload length
> > + */
> > +static void tcp_vu_pcap(const struct ctx *c, const struct flowside *tapside,
> 
> 'c' should be const (unless you modify data pointed by it, but I don't
> see where), otherwise gcc complains:
> 
>   tcp.c: In function ‘tcp_send_flag’:
>   tcp.c:1249:41: warning: passing argument 1 of ‘tcp_vu_send_flag’ discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
>    1249 |         return tcp_vu_send_flag(c, conn, flags);
>         |                                 ^
>   In file included from tcp.c:307:
>   tcp_vu.h:9:34: note: expected ‘struct ctx *’ but argument is of type ‘const struct ctx *’
>       9 | int tcp_vu_send_flag(struct ctx *c, struct tcp_tap_conn *conn, int flags);
>         |     ~~~~~~~~~~~~^

I'm guessing this was meant to go on the next function down. This might
be a content conflict with one of my recent changes: the "TCP cleanups"
added "const" to a lot of context pointers through the TCP code; I'm
guessing this function was calling one of them, so it couldn't be const
when Laurent first wrote it.

-- 
David Gibson (he or they)          | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au     | minimalist, thank you, not the other way
                                   | around.
http://www.ozlabs.org/~dgibson
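As a minimal sketch of the fix the warning above points at (assuming
nothing in the call chain writes through the context pointer), the
prototype in tcp_vu.h would simply gain the const qualifier that the
caller already has:

        /* tcp_vu.h: hypothetical constified prototype, matching the
         * const struct ctx * that tcp_send_flag() passes in
         */
        int tcp_vu_send_flag(const struct ctx *c, struct tcp_tap_conn *conn,
                             int flags);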