Currently, the ACL rules can only be accessed by hashing. In order to
dump their activity, the rules are also placed in a list.
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
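A minimal sketch of the idea, with hypothetical names rather than the
driver's actual structures: each rule carries both a hash node for fast
lookup and a list node so every rule can be walked when dumping activity.

    #include <linux/list.h>
    #include <linux/rhashtable.h>

    struct acl_rule {
            struct rhash_head ht_node;  /* fast lookup via rhashtable */
            struct list_head list;      /* linear walk for activity dump */
            unsigned long cookie;
            /* ... match/action payload ... */
    };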
|
|
Add support for retrieving TCAM entry activity. In order to support ACL
rule activity, the corresponding TCAM entry should be queried.
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add support for allocating a generic flow counter. A generic flow counter
can count packets, or packets and bytes, and can be assigned to different
hardware processes. The first use will be counting packets and bytes of
ACL rules, and will be introduced in the following patches.
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The MGPC register retrieves the generic flow counter value. It will be
used to query ACL counters.
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add an implementation for the counter allocator. The ASIC has a special
memory pool for various counting purposes. Counter memory is distributed
between equal-size banks.
The static sub-pool configuration should specify the following parameters
for each sub-pool:
- Number of required banks.
- Maximum entry size.
Each module can add a dedicated sub-pool or use an existing one.
Signed-off-by: Arkadi Sharshevsky <arkadis@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
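A hedged sketch of such a static sub-pool table; the names and values
below are illustrative, not the driver's actual layout.

    /* Hypothetical sub-pool descriptor. */
    struct counter_sub_pool {
            unsigned int bank_count;  /* number of banks required */
            unsigned int entry_size;  /* maximum entry size stored here */
            unsigned int base_index;  /* first counter index, set at init */
    };

    /* Example: one sub-pool dedicated to ACL packet+byte counters. */
    static struct counter_sub_pool counter_sub_pools[] = {
            { .bank_count = 2, .entry_size = 8 },
    };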
|
|
Support prepending data from XDP. We already always allocate some
headroom because the FW may prepend metadata to packets.
xdp_adjust_head() can be supported by making sure that this headroom is
big enough for XDP. In case the FW has prepended metadata to the packet,
however, we have to move it out of the way before we call XDP.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
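A simplified sketch of that step, using a hypothetical helper rather than
the driver's exact code: if the FW prepended meta_len bytes in front of
the packet, slide them to the start of the buffer so XDP gets the full
headroom.

    #include <linux/string.h>
    #include <linux/types.h>

    /* frame_start is the beginning of the buffer, data points at the
     * packet, and meta_len bytes of FW metadata sit just before data.
     */
    static void move_fw_metadata(void *frame_start, void *data,
                                 unsigned int meta_len)
    {
            if (!meta_len)
                    return;
            /* Relocate the metadata so xdp_adjust_head() has room. */
            memmove(frame_start, data - meta_len, meta_len);
    }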
|
|
XDP may require us to move the metadata to make room for pushing
headers. Track the metadata location with a pointer and pass
it explicitly to functions.
While at it, validate that meta_len from the descriptor is not
bogus.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Rename the pkt_off variable to dma_off; it should hold the data offset
counted from the beginning of the DMA mapping. Compute the value only
in the XDP context.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
NFP_NET_CFG_RX_OFFSET is 32 bits wide; make sure what we read from
there is reasonable for packet headroom. This allows us to store
the rx_offset in an 8-bit variable.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
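A hedged fragment of what such a sanity check could look like; the
accessor names and the bound are assumptions, not the driver's exact
code.

    u32 reg = nn_readl(nn, NFP_NET_CFG_RX_OFFSET);

    if (reg > 64) {                 /* larger than any sane prepend */
            nn_warn(nn, "invalid RX offset %u, using 0\n", reg);
            reg = 0;
    }
    nn->dp.rx_offset = reg;         /* fits in a u8 after the check */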
|
|
Instead of testing whether xdp_prog is present, store the DMA direction
in the data path structure.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Instead of passing around sets of rings and their parameters, just
store all the information in the data path structure.
We will no longer use xchg() on XDP programs when we swap programs
while traffic is guaranteed not to be flowing. This allows us
to simply assign the entire data path structure instead of copying it
field by field.
The optimization of reallocating only the rings on the side (RX/TX)
which has changed is also removed, since it does not seem worth the
code complexity.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
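A hedged sketch of what whole-structure assignment buys; the member
names are assumptions based on the description above.

    /* Build a modified copy of the data path, then swap it in while
     * traffic is stopped.
     */
    struct nfp_net_dp new_dp = nn->dp;  /* start from current settings */

    new_dp.xdp_prog = prog;
    new_dp.fl_bufsz = new_fl_bufsz;
    /* ... allocate rings described by new_dp ... */
    nn->dp = new_dp;                    /* one struct assignment, no
                                         * field-by-field copying */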
|
|
Use the xdp_prog member of the data path struct to carry the xdp_prog to
the alloc/free functions.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move the mtu member from the ring set to the data path struct.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use the fl_bufsz member of the data path struct to carry the desired size
of the free list entries.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Instead of passing variables around, use dp to store the number of TX
rings for the stack and the number of IRQ vectors.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Make callers of nfp_net_ring_reconfig() pass a newly allocated data
path structure. We will gradually make use of that structure
instead of passing parameters around to all the allocation functions.
This commit adds allocation and propagation of the new data path struct;
no parameters are converted yet.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The control BAR pointer is used to unmask interrupts, so it should be
in the first cacheline of the adapter structure.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Move all data path information into a separate structure. This way
we will be able to allocate a new data path, with all new rings etc.,
and swap it in easily.
No functional changes.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds the configuration of the AVB Credit-Based Shaper.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch prepares the MAC debug dump for multiple queues.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch prepares the MAC IRQ status treatment for multiple queues.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adapts the flow_ctrl function to prepare it for multiple queues.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds the functionality of mapping RX queues to DMA channels
based on the configuration.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch introduces the enabling of RX queues as DCB or as AVB based
on configuration.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds TX queues weight programming.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch adds programming of the RX and TX scheduling algorithms.
It introduces the multiple-queue configuration function
(stmmac_mtl_configuration) in stmmac_main.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
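A rough sketch of the configuration sequence described above; the helper
names are hypothetical and the flow is simplified compared to the real
stmmac_mtl_configuration().

    static void mtl_configuration_sketch(struct stmmac_priv *priv)
    {
            u32 rx_cnt = priv->plat->rx_queues_to_use;
            u32 tx_cnt = priv->plat->tx_queues_to_use;

            /* Pick scheduling algorithms only when several queues exist. */
            if (tx_cnt > 1)
                    configure_tx_scheduling(priv);  /* e.g. WRR, WFQ, SP */
            if (rx_cnt > 1)
                    configure_rx_scheduling(priv);  /* e.g. SP or WSP */

            /* Per-queue weights, DCB/AVB mode and RX-queue-to-DMA-channel
             * mapping are programmed in later steps of the series.
             */
    }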
|
|
This patch adds the multiple-queue configuration in the Device Tree.
A set of structures was also created to keep the RX and TX queue
configurations used in the driver.
Signed-off-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
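A hedged sketch of the kind of per-queue configuration structures this
implies; the field names are illustrative, not the driver's exact ones.

    #include <linux/types.h>

    struct rxq_cfg {
            u8 mode_to_use;         /* DCB or AVB */
            u32 chan;               /* DMA channel this RX queue maps to */
    };

    struct txq_cfg {
            u32 weight;             /* weight used by the TX scheduler */
            /* CBS parameters, relevant for AVB queues */
            u32 send_slope;
            u32 idle_slope;
            u32 high_credit;
            u32 low_credit;
    };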
|
|
The NVIDIA Tegra186 SoC contains an instance of the Synopsys DWC
Ethernet QoS IP core. The binding that it uses is slightly different
from existing ones because of the integration (clocks, resets, ...).
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Split out the binding specific parts of ->probe() and ->remove() to
enable the driver to support variants of the binding. This is useful in
order to keep backwards-compatibility while making it easy for a sub-
driver to deal only with the updated bindings rather than having to add
compatibility quirks all over the place.
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Reviewed-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
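One way to picture that split, as a hedged sketch; the structure name,
hook set and compatible strings below are illustrative.

    #include <linux/of_device.h>
    #include <linux/platform_device.h>

    /* Binding-specific hooks; the common ->probe() dispatches to these. */
    struct dwc_eth_dwmac_data {
            int (*probe)(struct platform_device *pdev);
            int (*remove)(struct platform_device *pdev);
    };

    static const struct dwc_eth_dwmac_data dwc_qos_data;    /* legacy binding */
    static const struct dwc_eth_dwmac_data tegra_eqos_data; /* Tegra186 binding */

    static const struct of_device_id dwc_eth_dwmac_match[] = {
            { .compatible = "snps,dwc-qos-ethernet-4.10", .data = &dwc_qos_data },
            { .compatible = "nvidia,tegra186-eqos", .data = &tegra_eqos_data },
            { }
    };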
|
|
Program the receive queue size based on the RX FIFO size and enable
hardware flow control for large FIFOs.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
New versions of this core encode the FIFO sizes in one of the feature
registers. Use these sizes as the default, but still allow the device
tree to override them for backwards compatibility.
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When DMA mapping an SKB fragment, the mapping must be checked for
errors, otherwise the DMA debug code will complain upon unmap.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
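The pattern in question, as a short hedged fragment; the surrounding
context and label are assumed, but the mapping/error-check calls are the
standard kernel DMA API.

    dma_addr_t addr;

    addr = skb_frag_dma_map(priv->device, frag, 0,
                            skb_frag_size(frag), DMA_TO_DEVICE);
    if (dma_mapping_error(priv->device, addr)) {
            /* Unwind already-mapped fragments and drop the packet
             * instead of handing a bad address to the hardware.
             */
            goto dma_map_err;
    }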
|
|
clk_prepare_enable() and clk_disable_unprepare() for this clock aren't
properly balanced, which can trigger a WARN_ON() in the common clock
framework.
Reviewed-by: Joao Pinto <jpinto@synopsys.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
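The balanced pattern being restored, sketched with a hypothetical clock
pointer and error path; some_later_step() is a placeholder.

    ret = clk_prepare_enable(priv->clk_ptp_ref);
    if (ret)
            return ret;

    ret = some_later_step();
    if (ret) {
            /* Undo exactly the one enable taken above, so the common
             * clock framework's counts stay balanced and its WARN_ON()
             * stays quiet.
             */
            clk_disable_unprepare(priv->clk_ptp_ref);
            return ret;
    }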
|
|
If an error occurs while opening the device, make sure to disable the
PTP reference clock.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
If an error occurs while opening the device, make sure that both the TX
timer and the PHY are properly cleaned up.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
There aren't currently any users of "clk_ptp_ref", but there are
other references to "ptp_ref", so I'm leaning towards considering the
former a typo. Fix it.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: devicetree@vger.kernel.org
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When sending an L3 miss, the family is set to AF_INET even for IPv6. This
causes userland (e.g. "ip monitor") to be confused. Ensure we send the
appropriate family in this case. For an L2 miss, keep using AF_INET.
Signed-off-by: Vincent Bernat <vincent@bernat.im>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Optimize DMA in NUMA systems by allocating memory from the NUMA node that
the NIC is plugged into; DMA will no longer cross NUMA nodes. If the NIC's
IRQs are pinned to a local CPU, that CPU's access to the DMA'd data is
also optimized.
Signed-off-by: VSR Burru <veerasenareddy.burru@cavium.com>
Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com>
Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@cavium.com>
Signed-off-by: Satanand Burla <satananda.burla@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
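A hedged sketch of the allocation pattern; the helper and variable names
are generic, not the driver's.

    int numa_node = dev_to_node(&pdev->dev); /* node the NIC sits on */
    void *ring;

    ring = vzalloc_node(ring_bytes, numa_node); /* stays node-local */
    if (!ring)
            return -ENOMEM;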
|
|
The code in netvsc_device_remove was incorrectly calling napi_disable
repeatedly on the same element. This would cause attempts
to remove the netvsc module to hang.
Fixes: 2506b1dc4bbe ("netvsc: implement NAPI")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
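The intent of the fix, as a hedged sketch; the structure member names
are assumptions.

    int i;

    /* Disable each channel's own NAPI instance exactly once, rather
     * than calling napi_disable() on the same (first) element N times.
     */
    for (i = 0; i < net_device->num_chn; i++)
            napi_disable(&net_device->chan_table[i].napi);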
|
|
Since rndis_halt_device waits until all outstanding sends and
receives are completed, the netvsc device still needs to schedule
NAPI to see those completions.
Fixes: 2506b1dc4bbe ("netvsc: implement NAPI")
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Madalin Bucur says:
====================
QorIQ DPAA 1 updates
This patch set introduces a series of fixes and features for the DPAA 1
drivers. Besides activating hardware Rx checksum offloading, four traffic
classes are added for Tx traffic prioritisation.
Changes from v1: added a patch to enable context-A stashing.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Tested-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
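For reference, a hedged sketch of what the get side of such a conversion
typically looks like; the driver name, speed and link modes below are
illustrative, not this driver's actual values.

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    static int drv_get_link_ksettings(struct net_device *ndev,
                                      struct ethtool_link_ksettings *cmd)
    {
            cmd->base.speed  = SPEED_10000;
            cmd->base.duplex = DUPLEX_FULL;
            cmd->base.port   = PORT_OTHER;

            ethtool_link_ksettings_zero_link_mode(cmd, supported);
            ethtool_link_ksettings_add_link_mode(cmd, supported,
                                                 10000baseT_Full);
            return 0;
    }

    static const struct ethtool_ops drv_ethtool_ops = {
            .get_link_ksettings = drv_get_link_ksettings,
            /* .set_link_ksettings is added only if the device can
             * actually change its link parameters.
             */
    };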
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The ethtool API {get|set}_settings is deprecated.
We move this driver to the new API {get|set}_link_ksettings.
As I don't have the hardware, I'd be very pleased if
someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When the abort mechanism is invoked it binds the first virtual router
(VR) to an LPM tree and inserts a default route to direct packets to the
CPU.
With VRFs, we can have router interfaces (RIFs) bound to multiple VRs,
so we need to make sure packets are trapped from all VRs and not just
the first one.
Upon abort invocation, bind all active VRs to the same LPM tree and
insert a default route in each.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Up until now we implicitly associated all the router interfaces (RIFs)
with the first virtual router (VR). This must be changed in order to
enable VRF offload. Otherwise, a packet received via a VRF slave would
do a FIB lookup in the same table used by other VRFs.
Instead, bind the RIF to a VR according to the table where FIB lookup
should be performed for packets received via the RIF.
Currently, we only care about the MAIN and LOCAL tables (which we squash
together).
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
A virtual router (VR) is an entity within the device to which routing
tables and interfaces can be bound. It can be used to implement VRFs.
In the initial implementation we associated the VR with a specific
protocol (e.g., IPv4) and an LPM tree. However, this isn't really
accurate, as the same VR can be used for both IPv4 and IPv6 traffic, by
binding a different LPM tree to a {VR, Proto} pair.
This patch aims to restructure the VR code according to the above logic,
so that VRs are more accurately represented by the driver's data
structures. The main motivation behind this change is to prepare the
driver for VRF offload.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
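A hedged sketch of the restructured representation; the member names are
simplified from the description above and not necessarily the driver's
exact ones.

    #include <linux/types.h>

    struct mlxsw_sp_fib;    /* per-{VR, proto} FIB, bound to its own LPM tree */

    struct mlxsw_sp_vr {
            u16 id;                     /* virtual router index in the device */
            u32 tb_id;                  /* kernel routing table this VR serves */
            struct mlxsw_sp_fib *fib4;  /* IPv4 FIB; an IPv6 one can follow */
    };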
|