On Tue, Feb 13, 2024 at 2:02 PM Paolo Abeni wrote:
On Tue, 2024-02-13 at 13:24 +0100, Eric Dumazet wrote:
On Tue, Feb 13, 2024 at 11:49 AM Paolo Abeni wrote:

@@ -2508,7 +2508,10 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
 		WRITE_ONCE(*seq, *seq + used);
 		copied += used;
 		len -= used;
-
+		if (flags & MSG_PEEK)
+			sk_peek_offset_fwd(sk, used);
+		else
+			sk_peek_offset_bwd(sk, used);
Yet another cache miss in TCP fast path...
We need to move sk_peek_off to a better location before we accept this patch.
I always thought MSG_PEEK was very inefficient; I am surprised we allow arbitrary loops in recvmsg().
Let me double check I read the above correctly: are you concerned about the 'skb_queue_walk(&sk->sk_receive_queue, skb) {' loop, which could touch a lot of skbs/cachelines before reaching the relevant skb?
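For concreteness, here is a minimal sketch of the walk I mean (simplified from tcp_recvmsg_locked(); not the verbatim kernel code):

	/*
	 * With MSG_PEEK plus a growing peek offset, *seq stays ahead of
	 * the queue head, so every recvmsg() call re-walks (and touches
	 * a cacheline of) each already-peeked skb before it reaches the
	 * first byte of new data.
	 */
	skb_queue_walk(&sk->sk_receive_queue, skb) {
		u32 offset = *seq - TCP_SKB_CB(skb)->seq;

		if (offset < skb->len)
			goto found_ok_skb;	/* data to copy starts here */
		/* fully peeked skb: keep walking */
	}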
The end goal here is to allow a user-space application to read the received data incrementally/sequentially while leaving it in the receive buffer.
I don't see a better option than MSG_PEEK; am I missing something?
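For reference, the intended user-space usage would look roughly like the following. This is only a sketch: it assumes the SO_PEEK_OFF support added by this patch, a non-blocking socket so the loop ends once the queued data is exhausted, and the per-chunk processing is left as a placeholder.

	#include <sys/socket.h>

	/* Peek all currently queued data without consuming it. */
	static void peek_all(int fd)
	{
		char buf[4096];
		int off = 0;
		ssize_t n;

		/* Arm the peek offset; each MSG_PEEK read advances it. */
		setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off));

		/*
		 * Each recv() returns the next chunk, resuming where the
		 * previous peek stopped; the data stays queued.
		 */
		while ((n = recv(fd, buf, sizeof(buf), MSG_PEEK)) > 0)
			; /* hand buf[0..n) to the application here */

		/*
		 * A later plain recv() (no MSG_PEEK) still returns all the
		 * data from the start, and rewinds the peek offset.
		 */
	}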
This sk_peek_offset protocol, needing sk_peek_offset_bwd() in the non-MSG_PEEK case, is very strange IMO. Ideally, we should read/write sk_peek_offset only when MSG_PEEK is used by the caller. That would only touch non-fast paths. Since the API is mono-threaded anyway, the caller should not rely on the fact that a normal recvmsg() call would 'consume' sk_peek_offset.
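Something along these lines instead (a rough sketch of the idea, not a tested patch):

	/*
	 * Only the MSG_PEEK path reads/writes sk_peek_off; the regular
	 * receive fast path never touches that cacheline. A non-peek
	 * read simply does not rewind the offset; the (single-threaded)
	 * caller re-arms it via SO_PEEK_OFF when needed.
	 */
	if (flags & MSG_PEEK)
		sk_peek_offset_fwd(sk, used);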