path: root/Documentation
author	Arjun Roy <arjunroy@google.com>	2020-06-07 18:54:41 -0700
committer	David S. Miller <davem@davemloft.net>	2020-06-08 19:08:17 -0700
commit	3763a24c727ecf236358a81ee749e5fcab1c972a (patch)
tree	9f841e8d82490893edf6d827a4a9bbc53f647d76 /Documentation
parent	0e6fbe39bdf71b4e665767bcbf53567a3e6d0623 (diff)
net-zerocopy: use vm_insert_pages() for tcp rcv zerocopy
Use vm_insert_pages() for tcp receive zerocopy. Spin lock cycles (as reported by perf) drop from a couple of percentage points to a fraction of a percent. This results in a roughly 6% increase in efficiency, measured roughly as zerocopy receive count divided by CPU utilization.

The intention of this patchset is to reduce atomic ops for tcp zerocopy receives, which normally hit the same spinlock multiple times consecutively.

[akpm@linux-foundation.org: suppress gcc-7.2.0 warning]
Link: http://lkml.kernel.org/r/20200128025958.43490-3-arjunroy.kdev@gmail.com
Signed-off-by: Arjun Roy <arjunroy@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Cc: David Miller <davem@davemloft.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
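The saving comes from batching: instead of mapping each received page with a separate vm_insert_page() call (which takes the page-table lock per page), the receive path can collect pages into an array and hand the whole batch to vm_insert_pages(). The sketch below is a minimal, hypothetical illustration of that pattern, not the code from this patch; the helper name, batch handling, and error policy are assumptions for illustration only.

	#include <linux/mm.h>
	#include <linux/errno.h>

	/*
	 * Hypothetical helper (not from this patch): map a batch of pages
	 * into a user VMA with a single vm_insert_pages() call instead of
	 * one vm_insert_page() call per page. vm_insert_pages() updates
	 * *num to the count of pages it did not insert, so any nonzero
	 * remainder is treated as a failure here.
	 */
	static int zc_map_page_batch(struct vm_area_struct *vma,
				     unsigned long addr,
				     struct page **pages, unsigned int nr)
	{
		unsigned long pages_remaining = nr;
		int err;

		err = vm_insert_pages(vma, addr, pages, &pages_remaining);
		if (err)
			return err;
		if (pages_remaining)
			return -EFAULT;	/* partial insert: treat as failure */
		return 0;
	}

Relative to a per-page vm_insert_page() loop, the batched call lets the page-table spin lock be acquired once per batch rather than once per page, which is where the reported drop in spin-lock cycles comes from.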
Diffstat (limited to 'Documentation')
0 files changed, 0 insertions, 0 deletions