On Thu, Jun 27, 2024 at 10:46:45PM +0200, Stefano Brivio wrote:
> On Thu, 27 Jun 2024 09:55:46 +0200
> Stefano Brivio <sbrivio(a)redhat.com> wrote:
> > On Thu, 27 Jun 2024 11:13:14 +1000
> > David Gibson <david(a)gibson.dropbear.id.au> wrote:
> > > On Thu, Jun 27, 2024 at 01:45:35AM +0200, Stefano Brivio wrote:
> > > > Potential sum and subtraction overflows were reported by
> > > > Coverity in a few places where we use size_t and ssize_t types.
> > > > Strictly speaking, those are not false positives even if I don't
> > > > see a way to trigger them: for instance, if we receive up to n
> > > > bytes from a socket, the return value from recv() is already
> > > > constrained and can't overflow for the usage we make in
> > > > tap_handler_passt().
> > >
> > > Actually, I think they are false positives. In a bunch of cases
> > > the reasoning for that does rely on assuming the kernel will never
> > > return a value greater than the buffer size for read(), write() or
> > > similar.
> >
> > No, that was exactly my point: return values are constrained by the
> > kernel, but a static checker doesn't necessarily have to assume a
> > kernel that's properly functioning. In general, static checkers do,
> > especially if POSIX has a clear definition of a system call, and for
> > what matters to us, they should. But here Coverity is ignoring that,
> > and I'm not sure we should call it a false positive. It's kind of
> > arbitrary, really. I think Coverity in these cases just prefers to
> > "blindly" apply CERT C INT32-C locally, which is not necessarily a
> > bad choice, because "false positives" are not so much of a nuisance.
> >
> > > So possibly just ASSERT()ing that would suffice.
> >
> > In some cases, yes, but as we have built-ins in gcc and Clang that
> > aim at keeping the cost of the checks down by, quoting the gcc
> > documentation, using "hardware instructions to implement these
> > built-in functions where possible", and they already implement the
> > operation, open-coding our own checks for assertions looks redundant
> > and might result in slower code.
>
> I tried, but in most cases, not even open-coding the whole thing, as
> an ASSERT() or an early return, say:
>
> 	if ((rc > 0 && lr->count < LONG_MIN + rc) ||
> 	    (rc < 0 && lr->count > LONG_MAX + rc))
> 		return -1;
>
> or its ASSERT() equivalent, is enough. That's because, without using
> built-in functions, we can't have (of course) "infinite precision"
> types. From:
>
>   https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
>
>   These built-in functions promote the first two operands into
>   infinite precision signed type and perform addition on those
>   promoted operands
>
> And it looks like Coverity wants to see operations executed in those
> types: size_t or unsigned long long is not enough. In two cases, I
> could make Coverity happy with some rather bulky asserts, if I turn
> operations into size_t plus ssize_t, in size_t domain. But the result
> looks really bulky.

I'm not talking about open-coding a general overflow checking add/sub,
I'm talking about checking the much tighter range we actually have.
That should let Coverity reason that an overflow isn't possible, and
also check for other possible weirdness.

-- 
David Gibson (he or they)	| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you, not the other way
				| around.  http://www.ozlabs.org/~dgibson