author    | Eric Dumazet <edumazet@google.com>    | 2015-05-15 12:39:28 -0700
committer | David S. Miller <davem@davemloft.net> | 2015-05-17 22:45:48 -0400
commit    | 8e4d980ac21596a9b91d8e720c77ad081975a0a8
tree      | dc700c270ee5120019b430d2a0d97f8a9c728c03
parent    | b8da51ebb1aa93908350f95efae73aecbc2e266c
tcp: fix behavior for epoll edge trigger
Under memory pressure, tcp_sendmsg() can fail to queue a packet
while no packet is present in write queue. If we return -EAGAIN
with no packet in write queue, no ACK packet will ever come
to raise EPOLLOUT.
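For context, here is a minimal userspace sketch of the pattern that breaks: an edge-triggered sender writes until -EAGAIN and then waits for the next EPOLLOUT. Before this fix, -EAGAIN with an empty write queue meant no ACK would ever free write space, so EPOLLOUT was never raised again and the sender stalled. The descriptors and buffer below are placeholders, not part of the patch.

```c
/* Sketch of an edge-triggered sender that relies on EPOLLOUT being
 * re-armed after -EAGAIN.  Descriptors and buffers are placeholders.
 */
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Write as much as possible; park the socket when the kernel says -EAGAIN. */
static void send_pending(int epfd, int sock, const char *buf, size_t len)
{
	size_t off = 0;

	while (off < len) {
		ssize_t n = write(sock, buf + off, len - off);

		if (n > 0) {
			off += n;
			continue;
		}
		if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
			/* Edge-triggered: we only retry once EPOLLOUT fires
			 * again.  If the kernel queued nothing before
			 * returning -EAGAIN, no ACK can come back to raise
			 * that event, and we stall right here.
			 */
			struct epoll_event ev = {
				.events = EPOLLOUT | EPOLLET,
				.data.fd = sock,
			};
			epoll_ctl(epfd, EPOLL_CTL_MOD, sock, &ev);
			return;
		}
		return; /* real error: handled elsewhere */
	}
}
```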
We need to allow one skb per TCP socket, and make sure that
tcp sockets can release their forward allocations under pressure.
This is a followup to commit 790ba4566c1a ("tcp: set SOCK_NOSPACE
under memory pressure")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-rw-r--r-- | net/ipv4/tcp.c | 15
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ecccfdc50d76..9eabfd3e0925 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -815,9 +815,20 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
 	/* The TCP header must be at least 32-bit aligned. */
 	size = ALIGN(size, 4);
 
+	if (unlikely(tcp_under_memory_pressure(sk)))
+		sk_mem_reclaim_partial(sk);
+
 	skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
-	if (skb) {
-		if (sk_wmem_schedule(sk, skb->truesize)) {
+	if (likely(skb)) {
+		bool mem_schedule;
+
+		if (skb_queue_len(&sk->sk_write_queue) == 0) {
+			mem_schedule = true;
+			sk_forced_mem_schedule(sk, skb->truesize);
+		} else {
+			mem_schedule = sk_wmem_schedule(sk, skb->truesize);
+		}
+		if (likely(mem_schedule)) {
 			skb_reserve(skb, sk->sk_prot->max_header);
 			/*
 			 * Make sure that we have exactly size bytes
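Distilled from the hunk above, and only as an illustrative sketch: the first skb of an empty write queue is always charged, so the flow can make forward progress and a later ACK can raise EPOLLOUT; subsequent skbs go through normal forward-allocation accounting. The struct and helpers below are simplified stand-ins for the kernel's memory accounting, not the real sk_forced_mem_schedule()/sk_wmem_schedule() implementations.

```c
/* Standalone model of the allocation decision in sk_stream_alloc_skb()
 * after this patch.  Types and helpers are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_sock {
	unsigned int write_queue_len; /* stands in for sk->sk_write_queue */
	long forward_alloc;           /* bytes already reserved for this socket */
	long pool_remaining;          /* bytes the shared TCP pool will still grant */
};

/* Normal accounting: succeed only if the shared pool can cover the charge. */
static bool wmem_schedule(struct fake_sock *sk, long truesize)
{
	if (sk->pool_remaining < truesize)
		return false;
	sk->pool_remaining -= truesize;
	sk->forward_alloc += truesize;
	return true;
}

/* Forced accounting: charge unconditionally, even if the pool is exhausted.
 * This models the "allow one skb per TCP socket" rule from the changelog.
 */
static void forced_mem_schedule(struct fake_sock *sk, long truesize)
{
	sk->pool_remaining -= truesize; /* may go negative under pressure */
	sk->forward_alloc += truesize;
}

/* Mirrors the new branch in sk_stream_alloc_skb(). */
static bool charge_new_skb(struct fake_sock *sk, long truesize)
{
	if (sk->write_queue_len == 0) {
		forced_mem_schedule(sk, truesize);
		return true;                /* first skb always goes through */
	}
	return wmem_schedule(sk, truesize); /* later skbs obey normal limits */
}

int main(void)
{
	struct fake_sock sk = { .write_queue_len = 0, .pool_remaining = 0 };

	/* Even with the pool exhausted, the first skb is accepted, so the
	 * socket can transmit, receive an ACK and eventually see EPOLLOUT.
	 */
	printf("first skb charged: %d\n", charge_new_skb(&sk, 2048));
	sk.write_queue_len = 1;
	printf("second skb charged: %d\n", charge_new_skb(&sk, 2048));
	return 0;
}
```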