On Fri, Dec 20, 2024 at 10:51:33AM +0100, Stefano Brivio wrote:
> On Fri, 20 Dec 2024 11:59:09 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
> 
> > On Thu, Dec 19, 2024 at 10:00:11AM +0100, Stefano Brivio wrote:
> > > On Fri, 13 Dec 2024 23:01:55 +1100
> > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > 
> > > > struct pool, which represents a batch of packets, includes values
> > > > giving the buffer in which all the packets lie - or, for
> > > > vhost_user, a link to the vu_dev_region array in which the
> > > > packets sit.  Originally that made sense because we stored each
> > > > packet as an offset and length within that buffer.  However,
> > > > dd143e389 ("packet: replace struct desc by struct iovec")
> > > > replaced the offset and length with a struct iovec, which can
> > > > directly reference a packet anywhere in memory.  This means we no
> > > > longer need the buffer reference to interpret packets from the
> > > > pool, so there's really no need to check where the packet sits.
> > > > We can remove the buf reference and all the checks associated
> > > > with it.  As a bonus, this removes the special case for
> > > > vhost-user.
> > > > 
> > > > Similarly, the old representation used a 16-bit length, so there
> > > > were some checks that packets didn't exceed that.  That's also no
> > > > longer necessary with the struct iovec, which uses a size_t
> > > > length.
> > > > 
> > > > I think that under an unlikely set of circumstances it might have
> > > > been possible to hit that 16-bit limit for a legitimate packet:
> > > > other parts of the code place a limit of 65535 bytes on the L2
> > > > frame, but that doesn't include the length tag used by the qemu
> > > > socket protocol.  That tag *is* included in the packet as stored
> > > > in the pool, however, meaning we could get a 65539-byte packet at
> > > > this level.
> > > 
> > > As I mentioned in the call on Monday: sure, we need to fix this,
> > > but at the same time I'm not quite convinced that it's a good idea
> > > to drop all these sanity checks.  Even if they're not based on
> > > offsets anymore, I think it's still valuable to ensure that the
> > > packets are not exactly _anywhere_ in memory, but only where we
> > > expect them to be.
> > > 
> > > If it's doable, I would rather keep these checks and change the
> > > ones on the length to allow a maximum value of 65539 bytes.  I
> > > mean, there's a big difference between 65539 and, say, 4294967296.
> > 
> > Right, I have draft patches that do basically this.
> > 
> > > By the way, I haven't checked what happens with MTUs slightly
> > > bigger than 65520 bytes: virtio-net (at least with QEMU) doesn't
> > > budge if I set more than 65520, but I didn't actually send big
> > > packets.  I'll try to have a look (also with muvm) unless you
> > > already checked.
> > 
> > I'm not sure what you mean by "doesn't budge".  No, I haven't
> > checked with either qemu or muvm.  There could of course be limits
> > applied by either VMM, or by the guest virtio-net driver.
> 
> Oh, sorry, I was deep in the perspective of trying to make things
> crash... and it didn't do anything, just accepted the setting and
> kept sending packets out.

Right.  Even without my packet pool changes, I'm not aware of a way to
make it crash, just ways to cause packets to be dropped when they
shouldn't.

> Let me try that then, with and without your new series...

-- 
David Gibson (he or they)		| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au		| minimalist, thank you, not the other way
					| around.
http://www.ozlabs.org/~dgibson
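
As a rough illustration of the length check being discussed, here is a
minimal sketch in C.  It is not code from the passt tree: the constant
names, the packet_len_ok() helper and its placement are assumptions made
up for this example.  It only shows the idea of keeping a per-packet
bound of 65539 bytes, i.e. the 65535-byte L2 frame limit plus the 4-byte
length tag used by the qemu socket protocol, applied to a packet that is
now described by a struct iovec rather than by an offset into a buffer.

    #include <stdio.h>
    #include <sys/uio.h>

    /* Limits discussed above: a 65535-byte cap on the L2 frame, plus the
     * 4-byte length tag prepended by the qemu socket protocol, for a
     * total of 65539 bytes as seen at the pool level.
     */
    #define L2_MAX_LEN   65535
    #define QEMU_TAG_LEN 4
    #define PKT_MAX_LEN  (L2_MAX_LEN + QEMU_TAG_LEN)

    /* Hypothetical helper (not the real passt API): sanity-check one
     * packet, described by a struct iovec, before adding it to a pool.
     */
    static int packet_len_ok(const struct iovec *iov)
    {
            if (!iov->iov_base || !iov->iov_len)
                    return 0;       /* empty or unset entry */

            if (iov->iov_len > PKT_MAX_LEN) {
                    fprintf(stderr, "packet too long: %zu > %d bytes\n",
                            iov->iov_len, PKT_MAX_LEN);
                    return 0;
            }

            return 1;
    }

    int main(void)
    {
            char frame[1500] = { 0 };       /* a typical Ethernet-sized frame */
            struct iovec iov = { .iov_base = frame, .iov_len = sizeof(frame) };

            return packet_len_ok(&iov) ? 0 : 1;
    }

The point of the sketch is simply that a bound of PKT_MAX_LEN (65539)
rather than 65535 accepts the largest legitimate packet while still
rejecting wildly wrong lengths such as 4294967296.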