author    Dongli Zhang <dongli.zhang@oracle.com>    2019-09-16 11:46:59 +0800
committer David S. Miller <davem@davemloft.net>     2019-09-16 21:46:22 +0200
commit    00b368502d18f790ab715e055869fd4bb7484a9b (patch)
tree      3d9be0a6c4ab99646eb5b36c57d980d2fbab7d05 /net/core
parent    a53651ec93a8d7ab5b26c5390e0c389048b4b4b6 (diff)
xen-netfront: do not assume sk_buff_head list is empty in error handling
When skb_shinfo(skb) cannot hold an extra fragment (that is,
skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS), xennet_fill_frags() assumes
the sk_buff_head list is already empty. As a result, cons is increased by
only 1 and control returns to the error handling path in xennet_poll().
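For context, the error path in xennet_fill_frags() that this patch targets
looked roughly as follows before the fix. This is a sketch reconstructed from
the description above rather than the verbatim source; the 'list', 'nskb' and
'cons' names follow the driver's usual conventions and may differ slightly.

	/* drivers/net/xen-netfront.c, xennet_fill_frags(), pre-fix (sketch) */
	if (unlikely(skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS)) {
		/* 'list' may still hold outstanding skbs at this point,
		 * yet rsp_cons is advanced past only one ring entry.
		 */
		queue->rx.rsp_cons = ++cons;
		kfree_skb(nskb);
		return ~0U;
	}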
However, if the sk_buff_head list is not empty, queue->rx.rsp_cons may be
set incorrectly. That is, queue->rx.rsp_cons would point to rx ring
buffer entries whose queue->rx_skbs[i] and queue->grant_rx_ref[i] have
already been cleared to NULL. This leads to a NULL pointer access in the
next iteration that processes rx ring buffer entries.
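The NULL access follows from how ring slots are consumed: each slot's skb (and
its grant reference) is fetched exactly once and the slot is cleared as
rsp_cons advances. A rough sketch of the accessor involved, reconstructed from
memory of this driver rather than quoted from the tree:

	/* sketch of xennet_get_rx_skb(): the slot is cleared after use */
	static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
						 RING_IDX ri)
	{
		int i = xennet_rxidx(ri);
		struct sk_buff *skb = queue->rx_skbs[i];

		queue->rx_skbs[i] = NULL;  /* a stale rsp_cons re-reads NULL here */
		return skb;
	}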
Below is how xennet_poll() does its error handling: all remaining entries
in tmpq are accounted to queue->rx.rsp_cons, without assuming how many
outstanding skbs remain in the list.
985 static int xennet_poll(struct napi_struct *napi, int budget)
... ...
1032 if (unlikely(xennet_set_skb_gso(skb, gso))) {
1033 __skb_queue_head(&tmpq, skb);
1034 queue->rx.rsp_cons += skb_queue_len(&tmpq);
1035 goto err;
1036 }
It is better to always handle the error in the same way.
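The patch body is not shown in this view, but the change described above
amounts to applying the same accounting on the xennet_fill_frags() error path,
roughly as sketched below (again a reconstruction under the assumption that
the pre-fix code looked like the earlier sketch):

	/* xennet_fill_frags() error path after the fix (sketch) */
	if (unlikely(skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS)) {
		/* account every skb still sitting on 'list', not just one entry */
		queue->rx.rsp_cons = ++cons + skb_queue_len(list);
		kfree_skb(nskb);
		return ~0U;
	}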
Fixes: ad4f15dc2c70 ("xen/netfront: don't bug in case of too many frags")
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>