author    David S. Miller <davem@davemloft.net>  2017-09-19 16:32:24 -0700
committer David S. Miller <davem@davemloft.net>  2017-09-19 16:32:24 -0700
commit    8ca712c373a462cfa1b62272870b6c2c74aa83f9 (patch)
tree      484826d20103aa296586883918c1ba6a3e74ef0b /kernel
parent    752fbcc33405d6f8249465e4b2c4e420091bb825 (diff)
parent    64bc17811b72758753e2b64cd8f2a63812c61fe1 (diff)
Merge branch 'net-speedup-netns-create-delete-time'
Eric Dumazet says:
====================
net: speedup netns create/delete time
When the rate of netns creation/deletion is high enough,
we observe softlockups in cleanup_net() caused by a huge list
of netns and far too many rcu_barrier() calls.
This patch series does some optimizations in kobject,
and adds batching to tunnels so that netns dismantles are
less costly.
IPv6 addrlabels also get a per-netns list, and tcp_metrics
also benefits from batch flushing.
This gives me a one-order-of-magnitude gain.
(~50 ms -> ~5 ms for one netns create/delete pair)
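The tunnel batching leans on the exit_batch hook of struct pernet_operations: cleanup_net() hands each subsystem the whole list of dying namespaces at once, so device teardown (and the RCU grace periods it implies) is paid once per batch instead of once per netns. A minimal kernel-side sketch of the pattern; the "foo" tunnel subsystem and its helpers are illustrative placeholders, not code from this series:

```c
/* Sketch only: batched pernet teardown for a hypothetical "foo" tunnel. */
static void foo_exit_batch_net(struct list_head *net_exit_list)
{
	struct net *net;
	LIST_HEAD(dev_kill_list);

	rtnl_lock();
	/* Collect doomed devices from every dying netns in one pass... */
	list_for_each_entry(net, net_exit_list, exit_list)
		foo_destroy_tunnels(net, &dev_kill_list);
	/* ...then unregister them together: one grace period, not one per netns. */
	unregister_netdevice_many(&dev_kill_list);
	rtnl_unlock();
}

static struct pernet_operations foo_net_ops = {
	.init       = foo_init_net,
	.exit_batch = foo_exit_batch_net,  /* replaces a per-netns .exit */
	.id         = &foo_net_id,
	.size       = sizeof(struct foo_net),
};
```

With .exit_batch in place of .exit, a burst of namespace deletions is dismantled in one rtnl_lock() section with a single call to unregister_netdevice_many(), which is where the bulk of the speedup comes from.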
Tested:
for i in `seq 1 40`
do
(for j in `seq 1 100` ; do unshare -n /bin/true >/dev/null ; done) &
done
wait ; grep net_namespace /proc/slabinfo
Before patch series:
$ time ./add_del_unshare.sh
net_namespace 116 258 5504 1 2 : tunables 8 4 0 : slabdata 116 258 0
real 3m24.910s
user 0m0.747s
sys 0m43.162s
After:
$ time ./add_del_unshare.sh
net_namespace 135 291 5504 1 2 : tunables 8 4 0 : slabdata 135 291 0
real 0m22.117s
user 0m0.728s
sys 0m35.328s
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'kernel')
0 files changed, 0 insertions, 0 deletions