author     Daniel Borkmann <daniel@iogearbox.net>    2018-08-10 16:07:50 +0200
committer  Daniel Borkmann <daniel@iogearbox.net>    2018-08-10 16:07:51 +0200
commit     c4c20217542469b9caf7f700ac9a2eeb32cb3742 (patch)
tree       445df34e43e9a988597c5ff8f1654d235e3aedb4 /net/core/skbuff.c
parent     eb91e4d4db06adef06e7f50c02813c13c6ca5a5b (diff)
parent     1bca4e6b1863c0a006fde6a66720a87823109294 (diff)
Merge branch 'bpf-sample-cpumap-lb'
Jesper Dangaard Brouer says:
====================
Background: cpumap moves the SKB allocation out of the driver code and
instead allocates the SKB on the remote CPU, then invokes the regular
kernel network stack with the newly allocated SKB.
The idea behind the XDP CPU redirect feature is to use XDP as a
load-balancer step in front of the regular kernel network stack. But the
current sample code does not provide a good example of this; part of
the reason is that, until now, I have implemented this as part of the
Suricata XDP load-balancer.
Given that this is the most frequent feature request I get, this patchset
implements the same XDP load-balancing as Suricata does: a symmetric
hash based on the IP-pairs + L4-protocol (a minimal illustrative sketch
follows this cover letter).
The expected setup for the use-case is to reduce the number of NIC RX
queues via ethtool (as XDP can handle more packets per core), and via
smp_affinity to assign these RX queues to a set of CPUs, which will be
handling RX packets. The CPUs that run the regular network stack are
supplied to the sample xdp_redirect_cpu tool by specifying
the --cpu option multiple times on the cmdline.
I do note that cpumap SKB creation is not feature complete yet, and
more work is coming. E.g. given that GRO is not implemented yet, expect
TCP workloads to be slower. My measurements do indicate that UDP
workloads are faster.
====================
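
The core idea can be sketched as a small XDP program: compute an
order-independent hash over the IP pair plus the L4 protocol, use it to
pick an entry in a BPF_MAP_TYPE_CPUMAP, and redirect the frame there.
The snippet below is a minimal illustrative sketch, not the code merged
here: the map name cpu_map, MAX_CPUS, the simple additive hash, and the
libbpf-style map definition are assumptions made for the example; the
real sample program uses its own hash function and user-space setup.

/* Minimal sketch of cpumap-based XDP load-balancing (illustrative,
 * not the merged sample code). IPv4 only; the hash is deliberately
 * simple: addition is order-independent, so both directions of a
 * flow map to the same CPU.
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define MAX_CPUS 64	/* illustrative upper bound */

struct {
	__uint(type, BPF_MAP_TYPE_CPUMAP);
	__uint(key_size, sizeof(__u32));	/* CPU index */
	__uint(value_size, sizeof(__u32));	/* queue size for that CPU */
	__uint(max_entries, MAX_CPUS);
} cpu_map SEC(".maps");

SEC("xdp")
int xdp_cpu_lb(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	struct iphdr *iph;
	__u32 hash, cpu;

	if (data + sizeof(*eth) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;		/* non-IPv4: leave to the stack */

	iph = data + sizeof(*eth);
	if ((void *)(iph + 1) > data_end)
		return XDP_PASS;

	/* Symmetric hash: IP pair + L4 protocol */
	hash = iph->saddr + iph->daddr + iph->protocol;
	cpu  = hash % MAX_CPUS;

	/* Enqueue the frame to the chosen remote CPU; the SKB is
	 * allocated there and fed into the regular network stack. */
	return bpf_redirect_map(&cpu_map, cpu, 0);
}

char _license[] SEC("license") = "GPL";

In the actual tool, user space populates only the cpumap slots for the
CPUs passed via --cpu (the value being the per-CPU queue size); frames
redirected to an unpopulated slot are effectively dropped, so in
practice the modulo would be taken over the number of configured CPUs.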
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'net/core/skbuff.c')
0 files changed, 0 insertions, 0 deletions