2016-02-15  igb: Merge VLVF configuration into igb_vfta_set  (Alexander Duyck)
This change makes it so that we can merge the configuration of the VLVF registers into the setting of the VFTA register. By doing this we simplify the logic and make use of similar functionality that we have already added for ixgbe making it easier to maintain both drivers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: Always enable VLAN 0 even if 8021q is not loaded  (Alexander Duyck)
This patch makes it so that we always add VLAN 0. This is important as we need to guarantee the PF can receive untagged frames in the case of SR-IOV being enabled but VLAN filtering not being enabled in the kernel. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: Do not factor VLANs into RLPML calculation  (Alexander Duyck)
The RLPML registers already take the size of VLAN headers into account when determining the maximum packet length. This is called out in EAS documents for several parts including the 82576 and the i350. As such we can drop the addition of size to the value programmed into the RLPML registers. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: Allow asymmetric configuration of MTU versus Rx frame size  (Alexander Duyck)
Since the igb driver is using page based receive there is no point in limiting the Rx capabilities of the device. The driver can receive 9K jumbo frames at all times. The only changes needed due to MTU changes are updates for the FIFO sizes and flow-control watermarks. Update the maximum frame size to reflect the 9.5K limitation of the hardware, and replace all instances of max_frame_size with MAX_JUMBO_FRAME_SIZE when referring to an Rx FIFO or frame. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: Refactor VFTA configuration  (Alexander Duyck)
This patch starts the clean-up process on the VFTA configuration. Specifically in this patch I attempt to address and simplify several items while also updating the code to bring it more inline with what is already in ixgbe. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: clean up code for setting MAC address  (Alexander Duyck)
Drop a bunch of hand written byte swapping code in favor of just doing the byte swapping ourselves. The registers are little endian registers storing a big endian value so if we read the MAC address array as little endian then we will get the CPU registers into the proper layout. Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb/igbvf: don't give up  (Mitch Williams)
The driver shouldn't just give up if it fails to get the hardware mailbox lock. This can happen in a situation where the PF-VF communication channel is heavily loaded and causes complete communications failure between the PF and VF drivers. Add a counter and a delay. The driver will now retry ten times, waiting one millisecond between retries. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
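For illustration, the bounded retry described above (ten attempts, one millisecond apart) boils down to a loop like the following sketch; try_mbx_lock() and struct my_hw are hypothetical stand-ins, not actual igbvf symbols.

	/* Sketch only: bounded retry of a contended hardware mailbox lock. */
	#include <linux/delay.h>
	#include <linux/errno.h>
	#include <linux/types.h>

	struct my_hw;
	bool try_mbx_lock(struct my_hw *hw);	/* assumed driver helper */

	static int obtain_mbx_lock(struct my_hw *hw)
	{
		int count = 10;

		do {
			if (try_mbx_lock(hw))
				return 0;	/* got the hardware mailbox lock */
			mdelay(1);		/* wait 1 ms before the next attempt */
		} while (--count);

		return -EBUSY;			/* channel stayed busy; report failure */
	}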
2016-02-15  igb: Unpair the queues when changing the number of queues  (Shota Suzuki)
By the commit 72ddef0506da ("igb: Fix oops caused by missing queue pairing"), the IGB_FLAG_QUEUE_PAIRS flag can now be set when changing the number of queues by "ethtool -L", but it is never cleared unless the igb driver is reloaded. This patch clears it if queue pairing becomes unnecessary as a result of "ethtool -L". Signed-off-by: Shota Suzuki <suzuki_shota_t3@lab.ntt.co.jp> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-15  igb: Remove unnecessary flag setting in igb_set_flag_queue_pairs()  (Shota Suzuki)
If VFs are enabled (max_vfs >= 1), both max_rss_queues and adapter->rss_queues are set to 2 in the case of e1000_82576. In this case, IGB_FLAG_QUEUE_PAIRS is always set in the default block as a result of fall-through, thus setting it in the e1000_82576 block is not necessary. Signed-off-by: Shota Suzuki <suzuki_shota_t3@lab.ntt.co.jp> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2016-02-12  Merge branch 'local-checksum-offload'  (David S. Miller)
Edward Cree says:
====================
Local Checksum Offload

Re-tested VxLAN; everything else is unchanged from v4.

Changes from v4:
* Rebased series to fix conflicts with vxlan/vxlan6 merge.

Changes from v3:
* Fixed inverted checksum values introduced in v3.
* Don't mangle zero checksums in GRE.
* Clear skb->encapsulation in iptunnel_handle_offloads when not using CHECKSUM_PARTIAL, lest drivers incorrectly interpret that as a request for inner checksum offload.

Changes from v2:
* Added support for IPv4 GRE.
* Split out 'always set up for checksum offload' into its own patch.
* Removed csum_help from iptunnel_handle_offloads.
* Rewrote LCO callers to only fold once.
* Simplified nocheck handling.

Changes from v1:
* Enabled support in more encapsulation protocols. I think it now covers everything except GRE.
* Wrote up some documentation covering TX checksum offload, LCO and RCO.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  Documentation/networking: add checksum-offloads.txt to explain LCO  (Edward Cree)
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: ip_tunnel: remove 'csum_help' argument to iptunnel_handle_offloads  (Edward Cree)
All users now pass false, so we can remove it, and remove the code that was conditional upon it. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: gre: Implement LCO for GRE over IPv4  (Edward Cree)
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  fou: enable LCO in FOU and GUE  (Edward Cree)
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: vxlan: enable local checksum offload  (Edward Cree)
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: enable LCO for udp_tunnel_handle_offloads() users  (Edward Cree)
The only protocol affected at present is Geneve. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: udp: always set up for CHECKSUM_PARTIAL offload  (Edward Cree)
If the dst device doesn't support it, it'll get fixed up later anyway by validate_xmit_skb(). Also, this allows us to take advantage of LCO to avoid summing the payload multiple times. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  net: local checksum offload for encapsulation  (Edward Cree)
The arithmetic properties of the ones-complement checksum mean that a correctly checksummed inner packet, including its checksum, has a ones-complement sum depending only on whatever value was used to initialise the checksum field before checksumming (in the case of TCP and UDP, this is the ones-complement sum of the pseudo header, complemented). Consequently, if we are going to offload the inner checksum with CHECKSUM_PARTIAL, we can compute the outer checksum based only on the packet data not covered by the inner checksum, and the initial value of the inner checksum field. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
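The arithmetic above can be sketched as a small helper, modelled on the one this series introduces but not quoted verbatim from the tree: sum only the outer bytes from the transport header up to the inner checksum start, seeded with the complement of the value already placed in the inner checksum field.

	/* Conceptual sketch of the LCO computation described above. */
	#include <linux/skbuff.h>
	#include <net/checksum.h>

	static __wsum lco_csum_sketch(struct sk_buff *skb)
	{
		unsigned char *csum_start = skb->head + skb->csum_start;
		unsigned char *l4_hdr = skb_transport_header(skb);
		__wsum partial;

		/* Start from the complement of the inner checksum field */
		partial = ~csum_unfold(*(__force __sum16 *)(csum_start +
							    skb->csum_offset));

		/* Add the checksum of our own headers not covered by it */
		return csum_partial(l4_hdr, csum_start - l4_hdr, partial);
	}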
2016-02-12  Merge branch 'tcp_dccp_ports'  (David S. Miller)
Eric Dumazet says:
====================
tcp/dccp: better use of ephemeral ports

Big servers have bloated bind tables, making it very hard for ephemeral port allocation to succeed without special container/namespace tricks. This patch series extends the strategy added in commit 07f4c90062f8 ("tcp/dccp: try to not exhaust ip_local_port_range in connect()").

Since ports used by connect() are likely to be shared among them, we give a hint to both bind() and connect() to keep the two crowds separated if possible. Of course, if on a specific host an application needs to allocate ~30000 ports using bind(), it will still be able to do so. The same holds for ~30000 connect() calls to a unique 2-tuple (dst addr, dst port).

The new implementation is also more friendly to softirqs and reschedules.

v2: rebase after TCP SO_REUSEPORT changes
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  tcp/dccp: better use of ephemeral ports in bind()  (Eric Dumazet)
Implement the strategy used in __inet_hash_connect() in the opposite way: try to find a candidate using odd ports, then fall back to even ports. We no longer disable BH for the whole traversal, but only one bucket at a time. We also use cond_resched() to yield the cpu to other tasks if needed. I removed one indentation level and tried to mirror the loop we have in __inet_hash_connect() and its variable names to ease code maintenance. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-12  tcp/dccp: better use of ephemeral ports in connect()  (Eric Dumazet)
In commit 07f4c90062f8 ("tcp/dccp: try to not exhaust ip_local_port_range in connect()"), I added a very simple heuristic so that we got better chances to use even ports, allowing bind() users to have more available slots. It gave nice results, but with more than 200,000 TCP sessions on a typical server, the ~30,000 ephemeral ports are still a rare resource. I chose to go a step further, by looking at all even ports and, if none was available, falling back to odd ports. The companion patch does the same in bind(), but in the opposite way. I've seen exec times of up to 30ms on busy servers, so I no longer disable BH for the whole traversal, but only for each hash bucket. I also call cond_resched() to be gentle to other tasks. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
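A conceptual sketch of the parity split described in these two patches (not the kernel code): connect() walks even ports first and falls back to odd ones, while bind() does the opposite, so the two populations stay mostly disjoint. port_free() here is a caller-supplied predicate standing in for the real hash-table checks.

	#include <linux/types.h>

	static int pick_port(int low, int high, bool for_connect,
			     bool (*port_free)(int port))
	{
		int parity = for_connect ? 0 : 1;	/* even first for connect() */
		int pass, port;

		for (pass = 0; pass < 2; pass++) {
			for (port = low; port <= high; port++) {
				if ((port & 1) != parity)
					continue;
				if (port_free(port))
					return port;
			}
			parity ^= 1;			/* fall back to the other parity */
		}
		return -1;				/* local port range exhausted */
	}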
2016-02-11  Merge branch 'net-mitigate-kmem_free-slowpath'  (David S. Miller)
Jesper Dangaard Brouer says:
====================
net: mitigating kmem_cache free slowpath

This patchset is the first real use-case for kmem_cache bulk _free_. The use of bulk _alloc_ is NOT included in this patchset; the full use has previously been posted here [1].

The bulk free side has the largest benefit for the network stack use-case, because the network stack is hitting the kmem_cache/SLUB slowpath when freeing SKBs, due to the amount of outstanding SKBs. This is solved by using the new API kmem_cache_free_bulk().

Introduce a new API, napi_consume_skb(), that hides/handles bulk freeing for the caller. Drivers simply need to use this call when freeing SKBs in NAPI context, e.g. replacing their calls to dev_kfree_skb() / dev_consume_skb_any(). The ixgbe driver is the first user of this new API.

[1] http://thread.gmane.org/gmane.linux.network/384302/focus=397373
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  ixgbe: bulk free SKBs during TX completion cleanup cycle  (Jesper Dangaard Brouer)
There is an opportunity to bulk free SKBs while reclaiming resources after DMA transmit completes in ixgbe_clean_tx_irq; bulk freeing at this point does not introduce any added latency. Simply use napi_consume_skb(), which was recently introduced. The napi_budget parameter is needed by napi_consume_skb() to detect if it is called from netpoll. Benchmarking IPv4 forwarding on a CPU i7-4790K @ 4.2GHz (no turbo boost), single CPU/flow numbers: before: 1982144 pps -> after: 2064446 pps. Improvement: +82302 pps, -20 nanosec, +4.1% (SLUB and GCC version 5.1.1 20150618 (Red Hat 5.1.1-4)). Joint work with Alexander Duyck. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
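The usage pattern described above looks roughly like the sketch below: in a driver's TX completion routine, hand each completed SKB to napi_consume_skb() together with the NAPI budget. Everything here except napi_consume_skb() (the ring/buffer types and my_next_completed()) is a made-up stand-in for driver-specific code.

	#include <linux/skbuff.h>

	struct my_ring;
	struct my_tx_buffer { struct sk_buff *skb; };
	struct my_tx_buffer *my_next_completed(struct my_ring *ring);	/* assumed helper */

	static void my_clean_tx_irq(struct my_ring *ring, int napi_budget)
	{
		struct my_tx_buffer *buf;

		while ((buf = my_next_completed(ring)) != NULL) {
			/* Bulk-frees in NAPI context; a budget of 0 signals
			 * that the call came from netpoll.
			 */
			napi_consume_skb(buf->skb, napi_budget);
			buf->skb = NULL;
		}
	}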
2016-02-11  net: bulk free SKBs that were delay free'ed due to IRQ context  (Jesper Dangaard Brouer)
The network stack defers freeing SKBs in case the free happens in IRQ context or when IRQs are disabled. This happens in __dev_kfree_skb_irq(), which writes SKBs that were freed during IRQ to the softirq completion queue (softnet_data.completion_queue). These SKBs are naturally delayed and cleaned up during NET_TX_SOFTIRQ in net_tx_action(). Take advantage of this and use the skb defer and flush API, as we are already in softirq context. For modern drivers this rarely happens, although most drivers do call dev_kfree_skb_any(), which detects the situation and calls __dev_kfree_skb_irq() when needed, because netpoll can call from IRQ context. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: bulk free infrastructure for NAPI context, use napi_consume_skb  (Jesper Dangaard Brouer)
It was discovered that the network stack was hitting the kmem_cache/SLUB slowpath when freeing SKBs. Doing bulk free with kmem_cache_free_bulk can speed up this slowpath. NAPI context is a bit special; let's take advantage of that for bulk freeing SKBs. In NAPI context we are running in softirq, which gives us certain protection. A softirq can run on several CPUs at once, BUT the important part is that a softirq will never preempt another softirq running on the same CPU. This gives us the opportunity to access per-cpu variables in softirq context. Extend napi_alloc_cache (which before only contained a page_frag_cache) to be a struct with a small array-based stack for holding SKBs, and introduce an SKB defer and flush API for accessing it. Introduce napi_consume_skb() as a replacement for e.g. dev_consume_skb_any() when running in NAPI context. A small trick to handle/detect whether we are called from netpoll is to check if budget is 0; in that case, we need to invoke dev_consume_skb_irq(). Joint work with Alexander Duyck. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  Merge branch 'virtio_net-ethtool-validation'  (David S. Miller)
Nikolay Aleksandrov says:
====================
virtio_net: better ethtool setting validation

This small set is a follow-up to the recent patches that added ethtool get/set settings. Patch 1 changes the speed validation routine to check if the speed is between 0 and INT_MAX (or SPEED_UNKNOWN), and patch 2 adds port validation to virtio_net and a better validation comment. This set is on top of Michael's patch which explains that speeds from 0 to INT_MAX are valid: http://patchwork.ozlabs.org/patch/578911/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  virtio_net: validate ethtool port setting and explain the user validation  (Nikolay Aleksandrov)
We should validate the port setting that we got from the user and check if it's what we've set it to (PORT_OTHER); also add an explanation that ignoring advertising is good as long as we don't have autonegotiation. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  ethtool: make validate_speed accept all speeds between 0 and INT_MAX  (Nikolay Aleksandrov)
Devices these days can have any speed, and as was recently pointed out, any speed from 0 to INT_MAX is valid, so adjust the speed validation to accept such values. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
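The relaxed check described above amounts to something like the sketch below: with the speed kept in an unsigned 32-bit field, anything up to INT_MAX is accepted, plus the SPEED_UNKNOWN sentinel. This is a sketch of the idea, not a verbatim copy of the uapi helper.

	#include <linux/kernel.h>
	#include <linux/ethtool.h>

	static inline int my_validate_speed(__u32 speed)
	{
		/* speed is unsigned, so "0 <= speed" is implicit */
		return speed <= INT_MAX || speed == SPEED_UNKNOWN;
	}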
2016-02-11  Merge branch 'dp83848-TLK10x'  (David S. Miller)
Andrew F. Davis says:
====================
net: phy: dp83848: Add support for TI TLK10x Ethernet PHYs

This series is [0] split into its logical components.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: dp83848: Add comments for register definitions  (Andrew F. Davis)
Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: dp83848: Add support for TI TLK10x Ethernet PHYs  (Andrew F. Davis)
The TI TLK10x Ethernet PHYs are similar in the interrupt-relevant registers and so are compatible with the DP83848x devices already supported. Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: dp83848: Reorganize code for readability and safety  (Andrew F. Davis)
Reorganize code by moving the desired interrupt mask definition out of function. Also rearrange the enable/disable interrupt function to prevent accidental over-writing of values in registers. Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: dp83848: Add PHY ID for TI version of DP83848C  (Andrew F. Davis)
After acquiring National Semiconductor, TI appears to have changed the Vendor Model Number for the DP83848C PHYs, add this new ID to supported IDs. Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: dp83848: Add macro for dp83848 compatible devices  (Andrew F. Davis)
Add a helper macro for defining dp83848 compatible phy devices. Update copyright info. Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  be2net: don't report EVB for older chipsets when SR-IOV is disabled  (Ivan Vecera)
The EVB (virtual bridge) functionality should be disabled on older BE3 and Lancer chips if SR-IOV is disabled in the NIC's BIOS. This setting is identified by a zero value of total VFs reported by the card. The GET_HSW_CONFIG command cannot be used as it is not supported by these older chipsets' FW. v2: added the comment Cc: Sathya Perla <sathya.perla@broadcom.com> Cc: Ajit Khaparde <ajit.khaparde@broadcom.com> Cc: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com> Cc: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Cc: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Ivan Vecera <ivecera@redhat.com> Acked-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  Merge branch 'spi_ks8995'  (David S. Miller)
Helmut Buchsbaum says:
====================
Add support for MICREL KSZ8795CLX 5-port switch

This patch series refactors the spi-ks8995 driver to finally add support for the MICREL KSZ8795CLX. Additionally, support for controlling a GPIO line for resetting the switch is added.

Helmut

Changes since v2:
- use GPIO_ACTIVE_LOW according to Andrew's remark.
- use an ePAPR-compliant node name in the example, thanks to Sergei for pointing this out.

Changes since v1:
- removed initializing registers from Device Tree, following Florian's advice.
- fixed GPIO handling for reset according to Andrew's remark.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  dt-bindings: net: ks8995: add bindings documentation for ks8995  (Helmut Buchsbaum)
Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: spi_ks8995: add support for MICREL KSZ8795CLX  (Helmut Buchsbaum)
Add support for MICREL KSZ8795CLX Integrated 5-Port, 10-/100-Managed Ethernet Switch with Gigabit GMII/RGMII and MII/RMII interfaces. Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: spi_ks8995: generalize creation of SPI commands  (Helmut Buchsbaum)
Prepare the creation of SPI reads and writes for other switch families. The KS8995 family uses the straightforward <8bit CMD><8bit ADDR> sequence. To be able to support the KSZ8795 family, which uses <3bit CMD><12bit ADDR><1bit TR>, make the SPI command creation chip-variant dependent. Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
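A bit-layout sketch of the two command formats named above; only the field widths come from the commit message, the packing helpers themselves are illustrative rather than the driver's actual code.

	#include <linux/types.h>

	/* KS8995 family : <8-bit CMD><8-bit ADDR> */
	static u16 ks8995_cmd(u8 cmd, u8 addr)
	{
		return ((u16)cmd << 8) | addr;		/* command in the high byte */
	}

	/* KSZ8795 family: <3-bit CMD><12-bit ADDR><1-bit TR> */
	static u16 ksz8795_cmd(u8 cmd, u16 addr)
	{
		return ((u16)(cmd & 0x7) << 13) |	/* 3-bit command */
		       ((addr & 0xfff) << 1);		/* 12-bit address, TR bit left clear */
	}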
2016-02-11  net: phy: spi_ks8995: add support for resetting switch using GPIO  (Helmut Buchsbaum)
When using device tree it is no longer possible to reset the PHY at board level. Furthermore, doing it in the driver allows powering down the switch when it is not used any more. The patch introduces a new optional property "reset-gpios" denoting an appropriate GPIO handle, e.g.: reset-gpios = <&gpio0 46 GPIO_ACTIVE_LOW>; Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: spi_ks8995: verify chip and determine revision  (Helmut Buchsbaum)
Since the chip variant is now determined by spi_device_id, verify family and chip id and determine the revision id. Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: phy: spi_ks8995: introduce spi_device_id table  (Helmut Buchsbaum)
Refactor to use spi_device_id table to facilitate easy extendability. Signed-off-by: Helmut Buchsbaum <helmut.buchsbaum@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  Merge branch 'thunderx-irq-hints'  (David S. Miller)
Sunil Goutham says:
====================
net: thunderx: Setting IRQ affinity hints and other optimizations

This patch series contains changes to:
- Add support for the virtual function's irq affinity hint
- Replace napi_schedule() with napi_schedule_irqoff()
- Reduce page allocation overhead by allocating pages of higher order when the page size is 4KB
- Add a couple of stats which help in debugging
- Make some miscellaneous changes to the BGX driver

Changes from v1:
- As suggested, changed the invalid-MAC-address log message to dev_err() instead of dev_warn().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: thunderx: Alloc higher order pages when pagesize is small  (Sunil Goutham)
Allocate higher order pages when the page size is small; this will reduce the number of calls to the page allocator and the wastage of memory. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
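A sketch of the idea: on 4 KB page systems, grab one higher-order compound page and carve it into receive buffers instead of calling the page allocator once per buffer. The order chosen here is only an example, not the driver's actual value.

	#include <linux/gfp.h>
	#include <linux/mm.h>

	static struct page *alloc_rx_chunk(gfp_t gfp)
	{
		unsigned int order = (PAGE_SIZE == 4096) ? 3 : 0;  /* e.g. one 32 KB chunk */

		return alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN, order);
	}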
2016-02-11  net: thunderx: bgx: Add log message when setting mac address  (Robert Richter)
Signed-off-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: thunderx: bgx: Use standard firmware node infrastructure.  (David Daney)
In the case of OF device tree, the firmware information is attached to the BGX device structure in the standard manner, so use the firmware iterators and accessors where possible. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net: thunderx: Assign affinity hints to vf's interrupts  (Sunil Goutham)
This affinity hint can be used by the user space irqbalance tool to set the preferred CPU mask for irqs registered by this VF. irqbalance needs to be in 'exact' mode to set the irq affinity the same as indicated by the affinity hint. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
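Handing out such hints typically looks like the sketch below; the vector-to-CPU mapping here is a simple round-robin example, not the driver's actual policy.

	#include <linux/interrupt.h>
	#include <linux/cpumask.h>

	static void set_vf_irq_hints(const int *irq, int nvec)
	{
		int vec;

		for (vec = 0; vec < nvec; vec++)
			irq_set_affinity_hint(irq[vec],
					      cpumask_of(vec % num_online_cpus()));
	}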
2016-02-11  net: thunderx: Use napi_schedule_irqoff()  (Sunil Goutham)
napi_schedule() is being called from hard irq context, hence switch to napi_schedule_irqoff(), which avoids unneeded calls to local_irq_save and local_irq_restore. Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  net, thunderx: Add TX timeout and RX buffer alloc failure stats.  (Thanneeru Srinivasulu)
When the system is low on atomic memory, too many error messages are logged. Since this is not a total failure but a simple switch to non-atomic allocation, it is better to have a stat. Also add a stat for resets kicked off due to the transmit watchdog timeout. Signed-off-by: Thanneeru Srinivasulu <tsrinivasulu@caviumnetworks.com> Signed-off-by: Sunil Goutham <sgoutham@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-02-11  Merge branch 'igmp-ns'  (David S. Miller)
Nikolay Borisov says:
====================
Make igmp sysctl knobs namespace aware

This series continues making more of the net-related sysctls namespace aware. The first two and the last patches are straightforward and convert sysctls which weren't defined to be namespace aware. The only thing in them is that each removes a define which is used in only one place (to initialise the respective sysctl), so I don't think this is a huge loss. The third patch, however, converts igmp_llm_reports, which was already defined in the ipv4_net_table but wasn't using any of the net namespace infrastructure.
====================