path: root/drivers
2020-11-02  net: ethernet: ti: am65-cpsw: prepare xmit/rx path for multi-port devices in mac-only mode  (Grygorii Strashko)

This patch adds multi-port support to the TI AM65x CPSW driver xmit/rx path in preparation for adding support for multi-port devices, like Main CPSW0 on K3 J721E SoC or the future CPSW3g on K3 AM64x SoC. Since DMA channels are common/shared for all ext Ports, and the RX/TX NAPI and DMA processing are going to be assigned to the first available netdev, this patch:
- ensures all RX descriptor fields are initialized;
- adds synchronization (locking) for the TX DMA push/pop operations, as the networking core locks are not enough any more;
- updates TX BQL processing for every packet in am65_cpsw_nuss_tx_compl_packets(), as every completed TX skb can have a different ndev assigned (come from different netdevs).

To avoid performance issues for existing one-port CPSW2g devices, the above changes are done only for multi-port devices, by splitting the xmit path for one-port and multi-port devices.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

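For illustration, a minimal sketch of the per-packet BQL accounting described above, where each completed skb may belong to a different netdev. This is not the driver's actual completion loop; pop_completed_skb() is a hypothetical helper and budget is assumed to be the NAPI budget of the caller.

  /* Sketch only: account BQL per packet because each completed skb may
   * have been queued by a different netdev in multi-port mode.
   */
  struct sk_buff *skb;

  while ((skb = pop_completed_skb(tx_chn)) != NULL) {
          struct net_device *ndev = skb->dev;
          struct netdev_queue *txq;

          txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));
          netdev_tx_completed_queue(txq, 1, skb->len);
          napi_consume_skb(skb, budget);
  }
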
2020-11-02  net: ethernet: ti: am65-cpsw: fix tx csum offload for multi mac mode  (Grygorii Strashko)

The current implementation uses the .ndo_set_features() callback to track NETIF_F_HW_CSUM feature changes and update the generic CPSW_P0_CONTROL_REG.RX_CHECKSUM_EN option accordingly. This is not going to work in the case of multi-port devices, as TX csum offload can be changed per netdev.

On K3 CPSWxG devices TX csum offload is enabled in the following way:
- the CPSW_P0_CONTROL_REG.RX_CHECKSUM_EN option enables TX csum offload in general and affects all TX DMA channels and packets;
- the corresponding fields in the TX DMA descriptor have to be filled properly when the upper layer wants to offload TX csum (skb->ip_summed == CHECKSUM_PARTIAL), and this is a per-packet option.

The Linux network core is expected to never request TX csum offload if the netdev NETIF_F_HW_CSUM feature is disabled; as a result, the TX DMA descriptors will not be modified, and per-packet TX csum offload will be disabled (or enabled) on a per-netdev basis. This, in turn, makes it safe to enable the CPSW_P0_CONTROL_REG.RX_CHECKSUM_EN option unconditionally.

Hence, fix TX csum offload for multi-port devices by:
- enabling the CPSW_P0_CONTROL_REG.RX_CHECKSUM_EN option in am65_cpsw_nuss_common_open() unconditionally;
- removing the .ndo_set_features() callback implementation, which was used only for NETIF_F_HW_CSUM feature update purposes.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

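A minimal sketch of the per-packet side of this scheme. The descriptor field and mask names below are hypothetical (the real driver encodes this through the CPPI5 descriptor layout); the point is only that the offload is requested per packet, so the global RX_CHECKSUM_EN bit can stay on.

  /* Sketch only: request TX checksum offload for this packet only when
   * the stack asked for it; CSUM_*_MASK and csum_info are illustrative.
   */
  if (skb->ip_summed == CHECKSUM_PARTIAL) {
          u32 cs_start = skb_checksum_start_offset(skb);
          u32 cs_offset = cs_start + skb->csum_offset;

          csum_info = FIELD_PREP(CSUM_START_MASK, cs_start) |
                      FIELD_PREP(CSUM_OFFSET_MASK, cs_offset);
  } else {
          csum_info = 0;  /* no per-packet csum request */
  }
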
2020-11-02  net: ethernet: ti: am65-cpsw: keep active if cpts enabled  (Grygorii Strashko)

Some K3 CPSW NUSS instances can lose context after a PM runtime ON->OFF->ON transition, depending on integration (including all submodules: CPTS, MDIO, etc.), like the J721E Main CPSW (CPSW9G). In case CPTS is enabled, it is initialized during probe and does not expect to be reset. Hence, keep K3 CPSW active by forbidding PM runtime if CPTS is enabled.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ethernet: ti: am65-cpsw: fix vlan offload for multi mac mode  (Grygorii Strashko)

The VLAN offload for AM65x CPSW2G is implemented using the existing ALE APIs, which are also used by the legacy CPSW drivers. So, right now it always adds the current ext. Port and the Host as VLAN members when a VLAN is added by the 8021Q core (.ndo_vlan_rx_add_vid), and forcibly removes the VLAN from the ALE table in .ndo_vlan_rx_kill_vid(). This works both for AM65x CPSW2G (which has only one ext. Port) and for legacy CPSW devices (which can't support the same VLAN on more than one Port in multi mac (dual-mac) mode). But it doesn't work for the new J721E and AM64x multi-port CPSWxG versions, which don't have such restrictions and allow offloading the same VLAN on any number of ports.

Now the attempt to add the same VLAN on two (or more) K3 CPSWxG Ports will cause:
- the VLAN members mask to be overwritten when the VLAN is added;
- the VLAN to be removed from the ALE table when any Port removes the VLAN.

This patch fixes the issue by:
- switching to cpsw_ale_vlan_add_modify() instead of cpsw_ale_add_vlan() when a VLAN is added to the ALE table, so the VLAN members mask will not be overwritten;
- updating cpsw_ale_del_vlan() as follows: if more than one ext. Port is in the VLAN member mask, then remove only the current port from the VLAN member mask, else remove the VLAN ALE entry.

Example:
  add: P1 | P0 (Host) -> members mask: P1 | P0
  add: P2 | P0        -> members mask: P2 | P1 | P0
  rem: P1 | P0        -> members mask: P2 | P0
  rem: P2 | P0        -> members mask: -

The VLAN is forcibly removed if port_mask=0 is passed to cpsw_ale_del_vlan(), to preserve the existing legacy CPSW drivers' functionality.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

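A minimal sketch of the deletion policy described above. The ale_* helpers, struct name and HOST_PORT below are hypothetical, not the actual cpsw_ale internals; the sketch only mirrors the decision logic.

  /* Sketch of the del policy; all names here are illustrative. */
  static int ale_vlan_del_modify(struct ale *ale, u16 vid, int port_mask)
  {
          int members = ale_get_vlan_members(ale, vid);

          if (!port_mask)                         /* legacy path: force removal */
                  return ale_del_vlan_entry(ale, vid);

          members &= ~port_mask;
          if (members & ~BIT(HOST_PORT))          /* other ext. Ports remain */
                  return ale_set_vlan_members(ale, vid,
                                              members | BIT(HOST_PORT));

          return ale_del_vlan_entry(ale, vid);    /* last Port: drop the entry */
  }
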
2020-11-02  net: ethernet: ti: cpsw_ale: add cpsw_ale_vlan_del_modify()  (Grygorii Strashko)

Add/export cpsw_ale_vlan_del_modify() and use it in cpsw_switchdev instead of the generic cpsw_ale_del_vlan() to avoid mixing 8021Q and switchdev VLAN offload.

This is a preparation patch required by the follow-up changes.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ethernet: ti: am65-cpsw: use cppi5_desc_is_tdcm()  (Grygorii Strashko)

Use the cppi5_desc_is_tdcm() helper for teardown indicator detection instead of a hard-coded value.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ethernet: ti: am65-cpsw: move free desc queue mode selection in pdata  (Grygorii Strashko)

In preparation for adding more multi-port K3 CPSW versions, move the free descriptor queue mode selection into am65_cpsw_pdata, so it can be selected based on the DT compatible property.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ethernet: ti: am65-cpsw: move ale selection in pdata  (Grygorii Strashko)

In preparation for adding more multi-port K3 CPSW versions, move the ALE selection into am65_cpsw_pdata, so it can be selected based on the DT compatible property.

Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: driver: hamradio: Fix potential unterminated string  (Andrew Lunn)

With W=1 the following warning is reported:

  In function ‘strncpy’,
      inlined from ‘hdlcdrv_ioctl’ at drivers/net/hamradio/hdlcdrv.c:600:4:
  ./include/linux/string.h:297:30: warning: ‘__builtin_strncpy’ specified bound 32 equals destination size [-Wstringop-truncation]
    297 | #define __underlying_strncpy __builtin_strncpy
        |                              ^
  ./include/linux/string.h:307:9: note: in expansion of macro ‘__underlying_strncpy’
    307 |  return __underlying_strncpy(p, q, size);

Replace strncpy() with strlcpy() to guarantee the string is terminated.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201031181700.1081693-1-andrew@lunn.ch
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

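For illustration, the general shape of such a fix; the destination/source names below are illustrative, not necessarily the exact ones in hdlcdrv.c.

  /* Before: if the source fills the buffer, no terminating NUL is written. */
  strncpy(dst.drivername, "hdlcdrv", sizeof(dst.drivername));

  /* After: strlcpy() truncates if needed and always NUL-terminates. */
  strlcpy(dst.drivername, "hdlcdrv", sizeof(dst.drivername));
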
2020-11-02  drivers: net: wan: lmc: Fix W=1 set but used variable warnings  (Andrew Lunn)

  drivers/net/wan/lmc/lmc_main.c: In function ‘lmc_ioctl’:
  drivers/net/wan/lmc/lmc_main.c:356:25: warning: variable ‘mii’ set but not used [-Wunused-but-set-variable]
    356 |     u16 mii;
        |         ^~~
  drivers/net/wan/lmc/lmc_main.c:427:25: warning: variable ‘mii’ set but not used [-Wunused-but-set-variable]
    427 |     u16 mii;
        |         ^~~
  drivers/net/wan/lmc/lmc_main.c: In function ‘lmc_interrupt’:
  drivers/net/wan/lmc/lmc_main.c:1188:9: warning: variable ‘firstcsr’ set but not used [-Wunused-but-set-variable]
   1188 |  u32 firstcsr;

This file has funky indentation, and makes little use of tabs. Keep with this style in the patch, but that makes checkpatch unhappy.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201031181417.1081511-1-andrew@lunn.ch
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  drivers: net: xen-netfront: Fixed W=1 set but unused warnings  (Andrew Lunn)

  drivers/net/xen-netfront.c:2416:16: warning: variable ‘target’ set but not used [-Wunused-but-set-variable]
   2416 |  unsigned long target;

Remove target and just discard the return value from simple_strtoul(). This patch does give a checkpatch warning, but the warning was there before anyway, as this file has lots of checkpatch warnings.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201031180435.1081127-1-andrew@lunn.ch
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  drivers: net: davicom: Add COMPILE_TEST support  (Andrew Lunn)

Improve the build testing of this davicom driver by enabling it when COMPILE_TEST is selected.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  drivers: net: davicom: Fixed unused but set variable with W=1  (Andrew Lunn)

  drivers/net/ethernet/davicom//dm9000.c: In function ‘dm9000_dumpblk_8bit’:
  drivers/net/ethernet/davicom//dm9000.c:235:6: warning: variable ‘tmp’ set but not used [-Wunused-but-set-variable]

The driver needs to read packet data from the device even when the packet is known bad. There is no need to assign the data to a variable during this discard operation.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  drivers: net: tulip: Fix set but not used with W=1  (Andrew Lunn)

When compiled for platforms other than __i386__ or __x86_64__:

  drivers/net/ethernet/dec/tulip/tulip_core.c: In function ‘tulip_init_one’:
  drivers/net/ethernet/dec/tulip/tulip_core.c:1296:13: warning: variable ‘last_irq’ set but not used [-Wunused-but-set-variable]
   1296 | static int last_irq;

Add more #if defined() to totally remove the code when not needed.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20201031005445.1060112-1-andrew@lunn.ch
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ftgmac100: add handling of mdio/phy nodes for ast2400/2500  (Ivan Mikhaylov)

The phy-handle property can't be handled well for the ast2400/2500, which have an embedded MDIO controller. Add ftgmac100_mdio_setup() for ast2400/2500 and initialize PHYs from the mdio child node with of_mdiobus_register().

Signed-off-by: Ivan Mikhaylov <i.mikhaylov@yadro.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-11-02  net: ftgmac100: move phy connect out from ftgmac100_setup_mdio  (Ivan Mikhaylov)

Split MDIO registration and PHY connect into ftgmac100_setup_mdio and ftgmac100_mii_probe.

Signed-off-by: Ivan Mikhaylov <i.mikhaylov@yadro.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  r8169: use pm_runtime_put_sync in rtl_open error path  (Heiner Kallweit)

We can safely runtime-suspend the chip if rtl_open() fails. Therefore switch the error path to use pm_runtime_put_sync() as well.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/aa093b1e-f295-5700-1cb7-954b54dd8f17@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  r8169: remove unneeded memory barrier in rtl_tx  (Heiner Kallweit)

tp->dirty_tx isn't changed outside rtl_tx(). Therefore I see no need to guarantee a specific order of reading tp->dirty_tx and tp->cur_tx. Having said that, we can remove the memory barrier. In addition, use READ_ONCE() when reading tp->cur_tx, because it can change in parallel to rtl_tx().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://lore.kernel.org/r/2264563a-fa9e-11b0-2c42-31bc6b8e2790@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

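A minimal sketch of the access pattern described above (variable handling simplified, not the driver's exact code):

  /* dirty_tx is only ever written by this function, so a plain read is
   * fine and no barrier is needed; cur_tx is updated concurrently by the
   * xmit path, so its read is annotated with READ_ONCE().
   */
  unsigned int dirty_tx = tp->dirty_tx;
  unsigned int pending = READ_ONCE(tp->cur_tx) - dirty_tx;
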
2020-10-31  ne2k: Fix Typo in RW-Bugfix  (Armin Wolf)

Correct a typo in ne.c and ne2k-pci.c which prevented activation of the RW-Bugfix.

Signed-off-by: Armin Wolf <W_Armin@gmx.de>
Link: https://lore.kernel.org/r/20201029143357.7008-1-W_Armin@gmx.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  net: macb: add support for high speed interface  (Parshuram Thombare)

This patch adds support for the 10GBASE-R interface to the Linux driver for Cadence's Ethernet controller. This controller has separate MACs and PCSes for the low and high speed paths. The high speed PCS supports 100M, 1G, 2.5G, 5G and 10G through a rate adaptation implementation. However, since it doesn't support auto-negotiation, the Linux driver is modified to support 10GBASE-R instead of USXGMII.

Signed-off-by: Parshuram Thombare <pthombar@cadence.com>
Link: https://lore.kernel.org/r/1603975627-18338-1-git-send-email-pthombar@cadence.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Display CGX, NIX and PF map in debugfs  (Rakesh Babu)

Unlike earlier silicon variants, the OcteonTx2 98xx silicon has 2 NIX blocks, and each CGX is mapped to either of the NIX blocks. Each NIX block supports 100G. The mapping between NIX blocks and CGX is done by firmware, based on the CGX speed config, to achieve the maximum possible network bandwidth. Since the mapping is not fixed, it's difficult for a user to figure out. Hence add a debugfs entry which displays the mapping between CGX LMAC, NIX block and RVU PF.

Sample output of this entry:

  ~# cat /sys/kernel/debug/octeontx2/rvu_pf_cgx_map
  PCI dev         RVU PF Func     NIX block       CGX     LMAC
  0002:02:00.0    0x400           NIX0            CGX0    LMAC0

Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Display NIX1 also in debugfs  (Rakesh Babu)

If the NIX1 block is also implemented, then add a new directory for NIX1 in the debugfs root. Stats of the NIX1 block can be read/written from/to the files in the directory "/sys/kernel/debug/octeontx2/nix1/".

Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-pf: Calculate LBK link instead of hardcoding  (Subbaraya Sundeep)

CGX links are followed by LBK links, but the number of CGX and LBK links varies between platforms. Hence get the number of links present in hardware from AF and use it to calculate the LBK link number.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Mbox changes for 98xx  (Subbaraya Sundeep)

This patch puts together all mailbox changes for the 98xx silicon:

Attach -> Modify the resource attach mailbox handler to request LFs from a block address out of multiple blocks of the same type. If a PF/VF needs LFs from two blocks of the same type, then the attach mbox should be called twice. Example:

  struct rsrc_attach *attach;
  .. Allocate memory for message ..
  attach->cptlfs = 3; /* 3 LFs from CPT0 */
  .. Send message ..

  .. Allocate memory for message ..
  attach->modify = 1;
  attach->cpt_blkaddr = BLKADDR_CPT1;
  attach->cptlfs = 2; /* 2 LFs from CPT1 */
  .. Send message ..

Detach -> Update the detach mailbox and its handler to detach resources from the CPT1 and NIX1 blocks.

MSIX -> Update the MSIX mailbox and its handler to return MSIX offsets for the new block CPT1.

Free resources -> Update the free_rsrc mailbox and its handler to return the free resource count of the new blocks NIX1 and CPT1.

Links -> The number of CGX, LBK and SDP links may vary between platforms. For example, in 98xx the number of CGX and LBK links is higher than in 96xx. Hence the info about the number of links present in hardware is useful for consumers to request link configuration properly. This patch sends this info in nix_lf_alloc_rsp.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Add NIX1 interfaces to NPC  (Subbaraya Sundeep)

On 98xx silicon, the NPC block has additional mcam entries, counters and NIX1 interfaces. An extended set of registers is present for the new mcam entries and counters. This patch does the following:
- updates the register accessing macros to use the extended set if present;
- configures the MKEX profile for NIX1 interfaces also;
- updates the mcam entry write functions to use the assigned NIX0/1 interfaces for the PF/VF.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Setup MCE context for assigned NIX  (Subbaraya Sundeep)

Initialize the MCE context for the assigned NIX0/1 block for a CGX mapped PF. The rvu_nix_aq_enq_inst() function is modified to work with nix_hw, so that MCE contexts for both NIX blocks can be initialized.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Map NIX block from CGX connection  (Subbaraya Sundeep)

Firmware configures the NIX block mapping for all CGXs to achieve maximum throughput. This patch reads the configuration and creates the mapping between RVU PF and NIX blocks. For LBK VFs, it assigns NIX0 to even numbered VFs and NIX1 to odd numbered VFs.

Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Initialize NIX1 block  (Rakesh Babu)

This patch modifies the NIX functions to operate with a nix_hw context, so that the existing functions can be used for both the NIX0 and NIX1 blocks. The NIX blocks present in the system are initialized during driver init and freed during exit.

Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Manage new blocks in 98xx  (Rakesh Babu)

AF manages the tasks of allocating and freeing LFs from RVU blocks to PFs and VFs. With the new NIX1 and CPT1 blocks in 98xx, this patch adds support for handling the new blocks too.

Co-developed-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  octeontx2-af: Update get/set resource count functions  (Subbaraya Sundeep)

Since multiple blocks of the same type are present in 98xx, modify the functions which get and update the resource count to work with an individual block address instead of a block type.

Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Signed-off-by: Rakesh Babu <rsaladi2@marvell.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  net: axienet: Properly handle PCS/PMA PHY for 1000BaseX mode  (Robert Hancock)

Update the axienet driver to properly support the Xilinx PCS/PMA PHY component which is used for 1000BaseX and SGMII modes, including properly configuring the auto-negotiation mode of the PHY and reading the negotiated state from the PHY.

Signed-off-by: Robert Hancock <robert.hancock@calian.com>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Link: https://lore.kernel.org/r/20201028171429.1699922-1-robert.hancock@calian.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  net: ipa: avoid a bogus warning  (Alex Elder)

The previous commit added support for IPA having up to six source and destination resources. But currently nothing uses more than four. (Five of each are used in a newer version of the hardware.)

I find that in one of my build environments the compiler complains about newly-added code in two spots. Inspection shows that the warnings have no merit, but this compiler does not recognize that.

  ipa_main.c:457:39: warning: array index 5 is past the end of the array (which contains 4 elements) [-Warray-bounds]

(and the same warning at line 483)

We can make this warning go away by changing the number of elements in the source and destination resource limit arrays--now rather than waiting until we need it to support the newer hardware. This change was coming soon anyway; make it now to get rid of the warning.

Signed-off-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20201031151524.32132-1-elder@linaro.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  r8169: remove no longer needed private rx/tx packet/byte counters  (Heiner Kallweit)

After switching to the net core rx/tx byte/packet counters we can remove the now unused private version.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  r8169: use struct pcpu_sw_netstats for rx/tx packet/byte counters  (Heiner Kallweit)

Switch to the net core rx/tx byte/packet counter infrastructure. This simplifies the code; the only small drawback is some memory overhead, because we use just one queue but allocate the counters per CPU.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-31  net: tlan: Replace in_irq() usage  (Sebastian Andrzej Siewior)

The driver uses in_irq() to determine if tlan_priv::lock has to be acquired in tlan_mii_read_reg() and tlan_mii_write_reg(). The interrupt handler acquires the lock outside of these functions, so the in_irq() check is meant to prevent a lock recursion deadlock. But this check is incorrect when interrupt force threading is enabled, because then the handler runs in thread context and in_irq() correctly returns false.

The usage of in_*() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context.

tlan_set_timer() has this conditional as well, but this function is only invoked from task context or the timer callback itself. So it always has to lock and the check can be removed.

tlan_mii_read_reg(), tlan_mii_write_reg() and tlan_phy_print() are invoked from interrupt and other contexts. Split out the actual function body into helper variants which are called from interrupt context, and make the original functions wrappers which acquire tlan_priv::lock unconditionally.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Samuel Chessman <chessman@tux.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

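A minimal sketch of the wrapper/helper split described above (function bodies elided; the __nolock naming is illustrative, not necessarily the patch's exact naming):

  /* Helper used from the interrupt handler, which already holds the lock. */
  static void __tlan_mii_read_reg_nolock(struct net_device *dev, u16 phy,
                                         u16 reg, u16 *val)
  {
          /* ... actual MII register access, caller holds priv->lock ... */
  }

  /* Wrapper for all other callers: takes the lock unconditionally,
   * so no in_irq() context check is needed any more.
   */
  static void tlan_mii_read_reg(struct net_device *dev, u16 phy, u16 reg,
                                u16 *val)
  {
          struct tlan_priv *priv = netdev_priv(dev);
          unsigned long flags;

          spin_lock_irqsave(&priv->lock, flags);
          __tlan_mii_read_reg_nolock(dev, phy, reg, val);
          spin_unlock_irqrestore(&priv->lock, flags);
  }
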
2020-10-31  net: forcedeth: Replace context and lock check with a lockdep_assert()  (Sebastian Andrzej Siewior)

nv_update_stats() triggers a WARN_ON() when invoked from hard interrupt context, because the locks in use are not hard interrupt safe. It also has an assert_spin_locked(), which was the lock check before the lockdep era.

Lockdep has way broader locking correctness checks and covers both issues, so replace the warning and the lock assert with lockdep_assert_held().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Rain River <rain.1986.08.12@gmail.com>
Cc: Zhu Yanjun <zyjzyj2000@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

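For illustration, a minimal sketch of the replacement; the struct and lock names are assumptions, not necessarily forcedeth's exact ones.

  static void nv_update_stats(struct net_device *dev)
  {
          struct fe_priv *np = netdev_priv(dev);

          /* One lockdep check covers both "wrong context" and "lock not held". */
          lockdep_assert_held(&np->hwstats_lock);

          /* ... read and accumulate hardware counters ... */
  }
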
2020-10-31  net: neterion: s2io: Replace in_interrupt() for context detection  (Sebastian Andrzej Siewior)

wait_for_cmd_complete() uses in_interrupt() to detect whether it is safe to sleep or not.

The usage of in_interrupt() in drivers is being phased out, and Linus clearly requested that code which changes behaviour depending on context should either be separated, or the context be conveyed in an argument passed by the caller, which usually knows the context. in_interrupt() is also only partially correct, because it fails to choose the correct code path when just preemption or interrupts are disabled.

Add an argument 'may_block' to both functions and adjust the callers to pass the context information.

The following call chains which end up invoking wait_for_cmd_complete() were analyzed to be safe to sleep:

  s2io_card_up()
    s2io_set_multicast()
    init_nic()
      init_tti()

  s2io_close()
    do_s2io_delete_unicast_mc()
      do_s2io_add_mac()

  s2io_set_mac_addr()
    do_s2io_prog_unicast()
      do_s2io_add_mac()

  s2io_reset()
    do_s2io_restore_unicast_mc()
      do_s2io_add_mc()
        do_s2io_add_mac()

  s2io_open()
    do_s2io_prog_unicast()
      do_s2io_add_mac()

The following call chains which end up invoking wait_for_cmd_complete() cannot sleep:

  __dev_set_rx_mode()
    s2io_set_multicast()

  s2io_txpic_intr_handle()
    s2io_link()
      init_tti()

Add a may_sleep argument to wait_for_cmd_complete(), s2io_set_multicast() and init_tti(), and hand the context information in from the call sites.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Jon Mason <jdmason@kudzu.us>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

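A minimal sketch of the pattern; the polling/completion test (cmd_complete()) and the poll count are illustrative, not the driver's actual code, while SUCCESS/FAILURE follow the driver's usual return convention.

  static int wait_for_cmd_complete(void __iomem *addr, u64 busy_bit,
                                   int bit_state, bool may_block)
  {
          int cnt = 0;

          while (cnt < 20) {
                  if (cmd_complete(readq(addr), busy_bit, bit_state))
                          return SUCCESS;

                  /* The caller, not in_interrupt(), decides if sleeping is OK. */
                  if (may_block)
                          msleep(50);     /* process context: may sleep */
                  else
                          mdelay(50);     /* atomic caller: busy-wait instead */
                  cnt++;
          }
          return FAILURE;
  }
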
2020-10-30  net: mscc: ocelot: support L2 multicast entries  (Vladimir Oltean)

There is one main difference in mscc_ocelot between IP multicast and L2 multicast. With IP multicast, destination ports are encoded into the upper bytes of the multicast MAC address. Example: to deliver the address 01:00:5E:11:22:33 to ports 3, 8, and 9, one would need to program the address of 00:03:08:11:22:33 into hardware. Whereas for L2 multicast, the MAC table entry points to a Port Group ID (PGID), and that PGID contains the port mask that the packet will be forwarded to.

As to why it is this way, no clue. My guess is that not all port combinations can be supported simultaneously with the limited number of PGIDs, and this was somehow an issue for IP multicast but not for L2 multicast. Anyway.

Prior to this change, the raw L2 multicast code was bogus, due to the fact that there wasn't really any way to test it using the bridge code. There were 2 issues:
- A multicast PGID was allocated for each MDB entry, but it wasn't in fact programmed to hardware. It was dummy.
- In fact we don't want to reserve a multicast PGID for every single MDB entry. That would be odd because we can only have ~60 PGIDs, but thousands of MDB entries. So instead, we want to reserve a multicast PGID for every single port combination for multicast traffic. And since we can have 2 (or more) MDB entries delivered to the same port group (and therefore PGID), we need to reference-count the PGIDs.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

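A minimal sketch of the reference-counted PGID lookup described above; the structure and helper names are hypothetical, not the ocelot driver's actual ones.

  /* One PGID per distinct destination port mask, shared by MDB entries. */
  struct mcast_pgid {
          struct list_head list;
          unsigned long ports;    /* destination port mask */
          int index;              /* hardware PGID number */
          refcount_t refcount;
  };

  static struct mcast_pgid *mcast_pgid_get(struct list_head *pgids,
                                           unsigned long ports)
  {
          struct mcast_pgid *pgid;

          list_for_each_entry(pgid, pgids, list) {
                  if (pgid->ports == ports) {
                          refcount_inc(&pgid->refcount);
                          return pgid;
                  }
          }
          return NULL;    /* caller allocates a new PGID, refcount starts at 1 */
  }
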
2020-10-30  net: mscc: ocelot: make entry_type a member of struct ocelot_multicast  (Vladimir Oltean)

This saves a re-classification of the MDB address on deletion.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: mscc: ocelot: remove the "new" variable in ocelot_port_mdb_add  (Vladimir Oltean)

It is not needed; a comment will suffice.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: mscc: ocelot: use ether_addr_copy  (Vladimir Oltean)

Since a helper is available for copying Ethernet addresses, let's use it.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: mscc: ocelot: classify L2 mdb entries as LOCKED  (Vladimir Oltean)

ocelot.h says:

  /* MAC table entry types.
   * ENTRYTYPE_NORMAL is subject to aging.
   * ENTRYTYPE_LOCKED is not subject to aging.
   * ENTRYTYPE_MACv4 is not subject to aging. For IPv4 multicast.
   * ENTRYTYPE_MACv6 is not subject to aging. For IPv6 multicast.
   */

We don't want the permanent entries added with 'bridge mdb' to be subject to aging.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  sfc: advertise our vlan features  (Edward Cree)

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  sfc: only use fixed-id if the skb asks for it  (Edward Cree)

AIUI, the NETIF_F_TSO_MANGLEID flag is a signal to the stack that a driver may _need_ to mangle IDs in order to do TSO, and conversely a signal from the stack that the driver is permitted to do so. Since we support both fixed and incrementing IPIDs, we should rely on the SKB_GSO_FIXEDID flag on a per-skb basis, rather than using the MANGLEID feature to make all TSOs fixed-id.

Includes other minor cleanups of ef100_make_tso_desc() coding style.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

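For illustration, the kind of per-skb decision this implies (descriptor handling omitted; this is a sketch, not the sfc code):

  /* Keep the IP ID fixed only when this particular skb asks for it,
   * instead of forcing fixed IDs for every TSO via NETIF_F_TSO_MANGLEID.
   */
  bool fixed_ipid = skb_shinfo(skb)->gso_type & SKB_GSO_FIXEDID;
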
2020-10-30  sfc: implement encap TSO on EF100  (Edward Cree)

The NIC only needs to know where the headers it has to edit (TCP and inner and outer IPv4) are, which fits GSO_PARTIAL nicely. It also supports non-PARTIAL offload of UDP tunnels, again just needing to be told the outer transport offset so that it can edit the UDP length field. (It's not clear to me whether the stack will ever use the non-PARTIAL version with the netdev feature flags we're setting here.)

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  sfc: extend bitfield macros to 17 fields  (Edward Cree)

We need EFX_POPULATE_OWORD_17 for an encap TSO descriptor on EF100.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: ipa: avoid going past end of resource group array  (Alex Elder)

The minimum and maximum limits for resources assigned to a given resource group are programmed in pairs, with the limits for two groups set in a single register. If the number of supported resource groups is odd, only half of the register that defines these limits is valid for the last group; that group has no second group in the pair.

Currently we ignore this constraint, and it turns out to be harmless, but it is not guaranteed to be. This patch addresses that, and adds support for programming the 5th resource group's limits.

Rework how the resource group limit registers are programmed by having a single function program all group pairs, rather than having one function program each pair. Add the programming of the 4-5 resource group pair limits to this function. If a resource group is not supported, pass a null pointer to ipa_resource_config_common() for that group and have that function write zeroes in that case.

Tested-by: Sujit Kautkar <sujitka@chromium.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

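A minimal sketch of the pairwise programming convention described above; the register offset and field mask names are placeholders, not the IPA driver's actual definitions.

  /* Program one limits register covering a pair of resource groups.
   * A NULL entry means "group not supported": its half is written as 0.
   */
  static void ipa_resource_limits_pair(struct ipa *ipa, u32 offset,
                                       const struct ipa_resource_limits *x,
                                       const struct ipa_resource_limits *y)
  {
          u32 val = 0;

          if (x) {
                  val |= u32_encode_bits(x->min, X_MIN_LIM_FMASK);
                  val |= u32_encode_bits(x->max, X_MAX_LIM_FMASK);
          }
          if (y) {
                  val |= u32_encode_bits(y->min, Y_MIN_LIM_FMASK);
                  val |= u32_encode_bits(y->max, Y_MAX_LIM_FMASK);
          }
          iowrite32(val, ipa->reg_virt + offset);
  }
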
2020-10-30  net: ipa: distinguish between resource group types  (Alex Elder)

The number of resource groups supported by the hardware can be different for source and destination resources. Determine the number supported for each using separate functions. Make the functions inline and move their definitions into "ipa_reg.h", because they determine whether certain register definitions are valid. Pass just the IPA hardware version as argument.

IPA_RESOURCE_GROUP_COUNT represents the maximum number of resource groups the driver supports for any hardware version. Change that symbol to be two separate constants, one for source and the other for destination resource groups. Rename them to end with "_MAX" rather than "_COUNT", to reflect their true purpose.

Tested-by: Sujit Kautkar <sujitka@chromium.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: ipa: assign endpoint to a resource group  (Alex Elder)

The IPA hardware manages various resources (e.g. descriptors) internally to perform its functions. The resources are grouped, allowing different endpoints to use separate resource pools. This way one group of endpoints can be configured to operate unaffected by the resource use of endpoints in a different group.

Endpoints should be assigned to a resource group, but we currently don't do that. Define a new resource_group field in the endpoint configuration data, and use it to assign the proper resource group to use for each AP endpoint.

Tested-by: Sujit Kautkar <sujitka@chromium.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2020-10-30  net: ipa: fix resource group field mask definition  (Alex Elder)

The mask for the RSRC_GRP field in the INIT_RSRC_GRP endpoint initialization register is incorrectly defined for IPA v4.2 (where it is only one bit wide). So we need to fix this.

The fix is not straightforward, however. Field masks are passed to functions like u32_encode_bits(), and for that they must be constant. To address this, we define a new inline function that returns the *encoded* value to use for a given RSRC_GRP field, which depends on the IPA version. The caller can then use something like this, to assign a given endpoint to resource group 1:

  u32 offset = IPA_REG_ENDP_INIT_RSRC_GRP_N_OFFSET(endpoint_id);
  u32 val = rsrc_grp_encoded(ipa->version, 1);

  iowrite32(val, ipa->reg_virt + offset);

The next patch requires this fix.

Tested-by: Sujit Kautkar <sujitka@chromium.org>
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

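A minimal sketch of how such a version-dependent encoder could look; the field masks and the exact version check are illustrative, not the driver's actual definitions.

  static inline u32 rsrc_grp_encoded(enum ipa_version version, u32 rsrc_grp)
  {
          /* Illustrative only: IPA v4.2 has a one-bit RSRC_GRP field,
           * other versions use a wider one. Each mask is a compile-time
           * constant, as u32_encode_bits() requires.
           */
          if (version == IPA_VERSION_4_2)
                  return u32_encode_bits(rsrc_grp, GENMASK(0, 0));

          return u32_encode_bits(rsrc_grp, GENMASK(2, 0));
  }
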