author     Hangbin Liu <liuhangbin@gmail.com>      2019-09-06 15:36:01 +0800
committer  David S. Miller <davem@davemloft.net>   2019-09-07 17:49:00 +0200
commit     0079ad8e8dc3a4d1af0dd4a53345580a6947beba
tree       2b109bf325c4c90497be693745a647c91aca0b5a
parent     06be463368774067d100b237145d9d75db9e05af
ipmr: remove hard code cache_resolve_queue_len limit
This is a re-post of a previous patch written by David Miller [1].
Phil Karn reported[2] that on busy networks with lots of unresolved
multicast routing entries, the creation of new multicast group routes
can be extremely slow and unreliable.
The reason is that the number of multicast route entries with unresolved
source addresses (cache_resolve_queue_len) is hard-coded to 10. If some
multicast routes never resolve and the number of unresolved entries keeps
growing, no new multicast route cache entries can be created at all.
To resolve this issue, we need to either add a sysctl entry to make
cache_resolve_queue_len configurable, or simply remove the
cache_resolve_queue_len limit entirely, since we already have the socket
receive queue limits of the mrouted socket, as David pointed out.
From my side, I'd prefer to remove the cache_resolve_queue_len limit instead
of creating two more (IPv4 and IPv6) sysctl entries.
[1] https://lkml.org/lkml/2018/7/22/11
[2] https://lkml.org/lkml/2018/7/21/343
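To illustrate the trade-off, here is a small userspace toy model (not the
kernel patch itself; OLD_HARD_LIMIT, RCVBUF_BYTES and ENTRY_COST are
assumed constants chosen only for the example). It contrasts a fixed cap
of 10 unresolved entries with a byte budget in the spirit of the mrouted
socket's receive buffer:

/* Toy model: fixed entry cap vs. a receive-buffer-style byte budget.
 * Purely illustrative userspace code, not taken from net/ipv4/ipmr.c.
 */
#include <stdio.h>
#include <stdbool.h>

#define OLD_HARD_LIMIT 10           /* old hard-coded unresolved-entry cap */
#define RCVBUF_BYTES   (64 * 1024)  /* assumed receive-buffer budget */
#define ENTRY_COST     256          /* assumed per-entry cost in bytes */

static bool old_can_queue(int unresolved)
{
	return unresolved < OLD_HARD_LIMIT;
}

static bool new_can_queue(int queued_bytes)
{
	return queued_bytes + ENTRY_COST <= RCVBUF_BYTES;
}

int main(void)
{
	int unresolved = 0, queued_bytes = 0, denied_old = 0, denied_new = 0;

	/* Simulate 1000 multicast groups whose sources never resolve. */
	for (int i = 0; i < 1000; i++) {
		if (old_can_queue(unresolved))
			unresolved++;
		else
			denied_old++;

		if (new_can_queue(queued_bytes))
			queued_bytes += ENTRY_COST;
		else
			denied_new++;
	}

	printf("old (cap %d): %d entries queued, %d denied\n",
	       OLD_HARD_LIMIT, unresolved, denied_old);
	printf("new (%d-byte budget): %d entries queued, %d denied\n",
	       RCVBUF_BYTES, queued_bytes / ENTRY_COST, denied_new);
	return 0;
}

With the hard cap, everything past the 10th unresolved group is rejected;
with a byte budget, the queue still stays bounded but scales with the
memory the administrator has already allowed for the mrouted socket.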
v3: instead of removing cache_resolve_queue_len entirely, only remove the
hard-coded limit when allocating the unresolved cache, as Eric Dumazet
suggested, so we don't need to re-count it elsewhere.
v2: hold the mfc_unres_lock while walking the unresolved list in
queue_count(), as Nikolay Aleksandrov reminded.
Reported-by: Phil Karn <karn@ka9q.net>
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>