author | Eric Dumazet <edumazet@google.com> | 2016-12-08 11:41:56 -0800
---|---|---
committer | David S. Miller <davem@davemloft.net> | 2016-12-09 22:12:21 -0500
commit | 6b229cf77d683f634f0edd876c6d1015402303ad (patch) |
tree | ec877d3e324da74f7bbb179d4cd4a8db0172865c /net/ipv4 |
parent | c84d949057cab262b4d3110ead9a42a58c2958f7 (diff) |
udp: add batching to udp_rmem_release()
If udp_recvmsg() constantly releases sk_rmem_alloc
for every read packet, it gives producers the opportunity
to immediately grab spinlocks and desperately
try adding another packet, causing false sharing.
We can add a simple heuristic to give the signal
in batches of ~25% of the queue capacity.
This patch increases performance under flood
by about 50%, since the thread draining the queue
is no longer slowed by false sharing.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
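The heuristic above can be modeled outside the kernel. The following is a minimal, hypothetical userspace sketch (the `struct sock_model` type, field names, and the `rmem_release()` helper are illustrative stand-ins, not kernel API): on a partial release, the freed size is accumulated into a deficit, and the actual release is deferred until the deficit reaches a quarter of the receive buffer or the queue drains.

```c
#include <stdbool.h>

/* Illustrative stand-in for the socket fields the patch touches. */
struct sock_model {
	int rcvbuf;          /* models sk->sk_rcvbuf */
	int rmem_alloc;      /* models sk->sk_rmem_alloc (atomic in the kernel) */
	int forward_deficit; /* models up->forward_deficit */
};

/* Returns true if memory was actually released, false if batched. */
static bool rmem_release(struct sock_model *sk, int size, bool partial,
			 bool queue_empty)
{
	if (partial) {
		sk->forward_deficit += size;
		size = sk->forward_deficit;
		/* Defer until the deficit reaches ~25% of the buffer,
		 * unless the queue has drained. */
		if (size < (sk->rcvbuf >> 2) && !queue_empty)
			return false;
	} else {
		size += sk->forward_deficit;
	}
	sk->forward_deficit = 0;
	sk->rmem_alloc -= size; /* the real code uses atomic_sub() */
	return true;
}
```

With a 4096-byte `rcvbuf`, a 500-byte partial release is deferred (500 < 1024), and a subsequent 600-byte release pushes the deficit to 1100 and flushes it in one shot, so the producer side sees one `sk_rmem_alloc` update instead of two.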
Diffstat (limited to 'net/ipv4')
-rw-r--r-- | net/ipv4/udp.c | 12 |
1 file changed, 12 insertions, 0 deletions
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index c608334d99aa..5a38faa12cde 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1177,8 +1177,20 @@ out:
 /* fully reclaim rmem/fwd memory allocated for skb */
 static void udp_rmem_release(struct sock *sk, int size, int partial)
 {
+	struct udp_sock *up = udp_sk(sk);
 	int amt;
 
+	if (likely(partial)) {
+		up->forward_deficit += size;
+		size = up->forward_deficit;
+		if (size < (sk->sk_rcvbuf >> 2) &&
+		    !skb_queue_empty(&sk->sk_receive_queue))
+			return;
+	} else {
+		size += up->forward_deficit;
+	}
+	up->forward_deficit = 0;
+
 	atomic_sub(size, &sk->sk_rmem_alloc);
 	sk->sk_forward_alloc += size;
 	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);