2020-07-20  qede: introduce support for FEC control  (Alexander Lobakin)
Add Ethtool callbacks for querying and setting FEC parameters if it's supported by the underlying qed module and MFW version running on the device. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
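The shape of such a change is roughly the following (a hedged sketch; the handler names and the reported modes are illustrative, not the actual qede code):

/* Sketch: wiring FEC query/set callbacks into ethtool_ops.
 * The example_* handler names are assumptions for illustration.
 */
#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int example_get_fecparam(struct net_device *dev,
				struct ethtool_fecparam *fec)
{
	/* Report modes supported by the firmware, e.g. RS/BaseR/off/auto. */
	fec->fec = ETHTOOL_FEC_OFF | ETHTOOL_FEC_RS | ETHTOOL_FEC_BASER |
		   ETHTOOL_FEC_AUTO;
	fec->active_fec = ETHTOOL_FEC_OFF;	/* currently active mode */
	return 0;
}

static int example_set_fecparam(struct net_device *dev,
				struct ethtool_fecparam *fec)
{
	/* Push the requested mode down to the management firmware here. */
	return 0;
}

static const struct ethtool_ops example_ethtool_ops = {
	.get_fecparam	= example_get_fecparam,
	.set_fecparam	= example_set_fecparam,
};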
2020-07-20  qede: format qede{,_vf}_ethtool_ops  (Alexander Lobakin)
Prior to adding new callbacks, format qede ethtool_ops structs to make declarations more fancy and readable. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qed: add support for Forward Error Correction  (Alexander Lobakin)
Add all necessary routines for reading supported FEC modes from NVM and querying FEC control to the MFW (if the running version supports it). Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qed: reformat several structures a bit  (Alexander Lobakin)
Prior to adding new fields and bitfields, reformat the related structures according to the Linux style (spaces to tabs, lowercase hex, indentation etc.). Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qed: use transceiver data to fill link partner's advertising speeds  (Alexander Lobakin)
Currently the qed driver does not take the transceiver's capabilities into consideration when generating the link partner's speed advertisement. This leads to e.g. incorrect ethtool link info on 10GBase-T modules. Use transceiver info not only for the advertisement and support arrays, but also for the link partner's abilities, to fix it.

Misc: fix a couple of comments nearby.

Signed-off-by: Alexander Lobakin <alobakin@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qed: add support for multi-rate transceivers  (Alexander Lobakin)
Set the corresponding advertised and supported link modes according to the detected transceiver type and device capabilities. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qed: reformat public_port::transceiver_data a bit  (Alexander Lobakin)
Prior to adding new bitfields, reformat the existing ones from spaces to tabs, and unify all hex values to lowercase. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  qede: populate supported link modes maps on module init  (Alexander Lobakin)
Simplify and lighten qede_set_link_ksettings() by declaring static link modes maps and populating them on module init. This way we save plenty of text size at the low expense of __ro_after_init and __initconst data (the latter will be purged after module init is done). Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
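A minimal sketch of the approach described above, assuming made-up table names and capability bits (not the actual qede tables): the source table is __initconst and only read once, while the resulting link mode mask is __ro_after_init.

#include <linux/bits.h>
#include <linux/ethtool.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/linkmode.h>
#include <linux/module.h>

struct capability_map {
	u32 cap;		/* driver/firmware capability bit (made up) */
	u32 ethtool_bit;	/* ETHTOOL_LINK_MODE_*_BIT */
};

static const struct capability_map example_map_src[] __initconst = {
	{ BIT(0), ETHTOOL_LINK_MODE_1000baseT_Full_BIT },
	{ BIT(1), ETHTOOL_LINK_MODE_10000baseT_Full_BIT },
};

/* Populated once at module init, effectively read-only afterwards. */
static __ETHTOOL_DECLARE_LINK_MODE_MASK(example_supported_map) __ro_after_init;

static int __init example_init(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(example_map_src); i++)
		linkmode_set_bit(example_map_src[i].ethtool_bit,
				 example_supported_map);

	return 0;
}
module_init(example_init);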
2020-07-20  qed, qede, qedf: convert link mode from u32 to ETHTOOL_LINK_MODE  (Alexander Lobakin)
The qed driver has already run out of the 32 bits used to store link modes, and this doesn't allow adding and supporting more speeds. Convert the custom link mode to the generic Ethtool bitmap and definitions (the convenient Phylink shorthands are used for elegance and readability). This allows us to drop all conversions/mappings between the driver and Ethtool.

This involves changes in qede and qedf as well, as they used definitions from the shared "qed_if.h".

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Alexander Lobakin <alobakin@marvell.com>
Signed-off-by: Igor Russkikh <irusskikh@marvell.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
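A hedged illustration of the conversion direction (the capability flag names are made up; only the Phylink shorthand macro and the linkmode helpers are real kernel API): a custom u32 speed bitmap is translated into a generic Ethtool link mode bitmap.

#include <linux/bits.h>
#include <linux/ethtool.h>
#include <linux/linkmode.h>
#include <linux/phylink.h>
#include <linux/types.h>

#define EXAMPLE_CAP_1G		BIT(0)	/* made-up firmware capability bits */
#define EXAMPLE_CAP_10G		BIT(1)

static void example_caps_to_linkmode(u32 caps, unsigned long *modes)
{
	linkmode_zero(modes);

	/* phylink_set() expands to __set_bit(ETHTOOL_LINK_MODE_..._BIT, modes) */
	if (caps & EXAMPLE_CAP_1G)
		phylink_set(modes, 1000baseT_Full);
	if (caps & EXAMPLE_CAP_10G)
		phylink_set(modes, 10000baseT_Full);
}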
2020-07-20  linkmode: introduce linkmode_intersects()  (Alexander Lobakin)
Add a new helper to find intersections between Ethtool link modes, linkmode_intersects(), similar to the other linkmode helpers. Signed-off-by: Alexander Lobakin <alobakin@marvell.com> Signed-off-by: Igor Russkikh <irusskikh@marvell.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
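Judging by the existing helpers in <linux/linkmode.h>, the new routine is presumably a thin wrapper around the bitmap API, along these lines (a sketch, not the verbatim patch):

#include <linux/bitmap.h>
#include <linux/ethtool.h>

/* Returns non-zero if the two Ethtool link mode masks share any bit. */
static inline int linkmode_intersects(const unsigned long *src1,
				      const unsigned long *src2)
{
	return bitmap_intersects(src1, src2, __ETHTOOL_LINK_MODE_MASK_NBITS);
}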
2020-07-20  Merge tag 'wireless-drivers-next-2020-07-20' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next  (David S. Miller)
Kalle Valo says:

====================
wireless-drivers-next patches for v5.9

First set of patches for v5.9. This comes later than usual as I was offline for two weeks.

The biggest change here is moving the Microchip wilc1000 driver from staging. There was an immutable topic branch with one commit moving the whole driver, and the topic branch was pulled both to staging-next and wireless-drivers-next. At the moment the only reported conflict is in the MAINTAINERS file, so I'm hoping the move should go smoothly.

Other notable changes are ath11k getting 6 GHz band support and rtw88 supporting the RTL8821CE. And there are also the usual fixes, API changes and cleanups all over.

Major changes:

wilc1000
* move from drivers/staging to drivers/net/wireless/microchip

ath11k
* add 6 GHz band support
* add spectral scan support

iwlwifi
* make FW reconfiguration quieter by not using warn level

rtw88
* add support for RTL8821CE
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  sis: switch from 'pci_' to 'dma_' API  (Christophe JAILLET)
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'epic_init_one()' (sis190.c), GFP_KERNEL can be used because this is a net_device_ops' 'ndo_open' function. This function is protected by the rtnl_lock() semaphore. So only a mutex is used and no spin_lock is acquired.

When memory is allocated in 'sis900_probe()' (sis900.c), GFP_KERNEL can be used because it is a probe function and no spin_lock is acquired.

@@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL
@@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE
@@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE
@@ @@ - PCI_DMA_NONE + DMA_NONE
@@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
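For illustration, this is what the conversion looks like on the driver side (a generic sketch with made-up function names, not taken from the sis sources): the pci_* DMA wrappers become direct dma_* calls on &pdev->dev, with an explicit GFP_ flag for coherent allocations.

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
				dma_addr_t *handle)
{
	/* Before: pci_alloc_consistent(pdev, size, handle); */
	return dma_alloc_coherent(&pdev->dev, size, handle, GFP_KERNEL);
}

static dma_addr_t example_map_tx_data(struct pci_dev *pdev, void *data,
				      size_t len)
{
	/* Before: pci_map_single(pdev, data, len, PCI_DMA_TODEVICE); */
	return dma_map_single(&pdev->dev, data, len, DMA_TO_DEVICE);
}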
2020-07-20  r6040: switch from 'pci_' to 'dma_' API  (Christophe JAILLET)
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'r6040_open()', GFP_KERNEL can be used because this is a net_device_ops' 'ndo_open' function. This function is protected by the rtnl_lock() semaphore. So only a mutex is used and no spin_lock is acquired.

@@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL
@@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE
@@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE
@@ @@ - PCI_DMA_NONE + DMA_NONE
@@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: packetengines: switch from 'pci_' to 'dma_' API  (Christophe JAILLET)
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below and has been hand modified to replace GFP_ with a correct flag. It has been compile tested.

When memory is allocated in 'hamachi_init_one()' (hamachi.c), GFP_KERNEL can be used because it is a probe function and no lock is acquired.

When memory is allocated in 'yellowfin_init_one()' (yellowfin.c), GFP_KERNEL can be used because it is a probe function and no lock is acquired.

@@ @@ - PCI_DMA_BIDIRECTIONAL + DMA_BIDIRECTIONAL
@@ @@ - PCI_DMA_TODEVICE + DMA_TO_DEVICE
@@ @@ - PCI_DMA_FROMDEVICE + DMA_FROM_DEVICE
@@ @@ - PCI_DMA_NONE + DMA_NONE
@@ expression e1, e2, e3; @@ - pci_alloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3; @@ - pci_zalloc_consistent(e1, e2, e3) + dma_alloc_coherent(&e1->dev, e2, e3, GFP_)
@@ expression e1, e2, e3, e4; @@ - pci_free_consistent(e1, e2, e3, e4) + dma_free_coherent(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_single(e1, e2, e3, e4) + dma_map_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_single(e1, e2, e3, e4) + dma_unmap_single(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4, e5; @@ - pci_map_page(e1, e2, e3, e4, e5) + dma_map_page(&e1->dev, e2, e3, e4, e5)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_page(e1, e2, e3, e4) + dma_unmap_page(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_map_sg(e1, e2, e3, e4) + dma_map_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_unmap_sg(e1, e2, e3, e4) + dma_unmap_sg(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_cpu(e1, e2, e3, e4) + dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_single_for_device(e1, e2, e3, e4) + dma_sync_single_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_cpu(e1, e2, e3, e4) + dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)
@@ expression e1, e2, e3, e4; @@ - pci_dma_sync_sg_for_device(e1, e2, e3, e4) + dma_sync_sg_for_device(&e1->dev, e2, e3, e4)
@@ expression e1, e2; @@ - pci_dma_mapping_error(e1, e2) + dma_mapping_error(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_dma_mask(e1, e2) + dma_set_mask(&e1->dev, e2)
@@ expression e1, e2; @@ - pci_set_consistent_dma_mask(e1, e2) + dma_set_coherent_mask(&e1->dev, e2)

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  arch, net: remove the last csum_partial_copy() leftovers  (Christoph Hellwig)
Most of the tree only uses and implements csum_partial_copy_nocheck, but c6x and lib/checksum.c implement a csum_partial_copy that isn't used anywhere except to define csum_partial_copy_nocheck. Get rid of this pointless alias.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: fs_enet: remove redundant null check  (Zhang Changzhong)
clk_prepare_enable() and clk_disable_unprepare() already check for a NULL clock parameter, so the additional checks are unnecessary; just remove them.

Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  Merge branch 'net-macb-Wake-on-Lan-magic-packet-GEM-and-MACB-handling'  (David S. Miller)
Nicolas Ferre says:

====================
net: macb: Wake-on-Lan magic packet GEM and MACB handling

Here is the second part of support for WoL magic-packet on the current macb driver. This one addresses the bulk of the feature and is based on current net-next/master.

MACB and GEM code must co-exist, and as they don't share exactly the same register layout, I had to specialize the suspend/resume paths a bit and plug in a specific IRQ handler in order to avoid overloading the "normal" IRQ hot path.

These changes were tested on both sam9x60, which embeds a MACB+FIFO controller, and sama5d2, which has a GEM+packet buffer type of controller.

Best regards,
Nicolas

Changes in v7:
- Release the spinlock before exiting macb_suspend/resume in case of error changing IRQ handler

Changes in v6:
- rebase on net-next/master now that the "fixes" patches of the series are merged in both net and net-next.
- GEM addition and MACB update to finish the support of WoL magic-packet on the two revisions of the controller.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: macb: Add WoL interrupt support for MACB type of Ethernet controller  (Nicolas Ferre)
Handle the Wake-on-Lan interrupt for the Cadence MACB Ethernet controller. As we do for the GEM version, we handle the WoL interrupt in a specialized interrupt handler for the MACB version that is positioned just between the suspend() and resume() calls.

Cc: Claudiu Beznea <claudiu.beznea@microchip.com>
Cc: Harini Katakam <harini.katakam@xilinx.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: macb: WoL support for GEM type of Ethernet controller  (Nicolas Ferre)
Adapt the Wake-on-Lan feature to the Cadence GEM Ethernet controller. This controller has a different register layout and cannot be handled by the previous code. We completely disable interrupts on all the queues but queue 0.

Handling of the WoL interrupt is done in another interrupt handler, positioned depending on the controller version used, just between the suspend() and resume() calls. This lowers pressure on the generic interrupt hot path by removing the need to handle two tests for each IRQ: the first figuring out the controller revision, the second actually checking whether the WoL bit is set.

Queue management in the suspend()/resume() functions is inspired by the RFC patch from Harini Katakam <harinik@xilinx.com>, thanks!

Cc: Claudiu Beznea <claudiu.beznea@microchip.com>
Cc: Harini Katakam <harini.katakam@xilinx.com>
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  sched: sch_api: add missing rcu read lock to silence the warning  (Jiri Pirko)
In case qdisc_match_from_root() is called from a non-RCU path with the rtnl mutex held, a suspicious RCU usage warning appears:

[ 241.504354] =============================
[ 241.504358] WARNING: suspicious RCU usage
[ 241.504366] 5.8.0-rc4-custom-01521-g72a7c7d549c3 #32 Not tainted
[ 241.504370] -----------------------------
[ 241.504378] net/sched/sch_api.c:270 RCU-list traversed in non-reader section!!
[ 241.504382] other info that might help us debug this:
[ 241.504388] rcu_scheduler_active = 2, debug_locks = 1
[ 241.504394] 1 lock held by tc/1391:
[ 241.504398] #0: ffffffff85a27850 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x49a/0xbd0
[ 241.504431] stack backtrace:
[ 241.504440] CPU: 0 PID: 1391 Comm: tc Not tainted 5.8.0-rc4-custom-01521-g72a7c7d549c3 #32
[ 241.504446] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-2.fc32 04/01/2014
[ 241.504453] Call Trace:
[ 241.504465] dump_stack+0x100/0x184
[ 241.504482] lockdep_rcu_suspicious+0x153/0x15d
[ 241.504499] qdisc_match_from_root+0x293/0x350

Fix this by passing the rtnl-held lockdep condition down to hlist_for_each_entry_rcu().

Reported-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
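A sketch of the fix pattern named above, on a made-up list rather than the actual sch_api.c code: the optional last argument of hlist_for_each_entry_rcu() is a lockdep condition, so a caller that holds the rtnl mutex instead of rcu_read_lock() no longer triggers the splat.

#include <linux/rculist.h>
#include <linux/rtnetlink.h>
#include <linux/types.h>

struct example_entry {
	u32 handle;
	struct hlist_node node;
};

static struct example_entry *example_lookup(struct hlist_head *head,
					    u32 handle)
{
	struct example_entry *e;

	/* Safe under either rcu_read_lock() or the rtnl mutex. */
	hlist_for_each_entry_rcu(e, head, node, lockdep_rtnl_is_held()) {
		if (e->handle == handle)
			return e;
	}
	return NULL;
}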
2020-07-20  net: ena: Fix using plain integer as NULL pointer in ena_init_napi_in_range  (Wang Hai)
Fix sparse build warning:

drivers/net/ethernet/amazon/ena/ena_netdev.c:2193:34: warning: Using plain integer as NULL pointer

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wang Hai <wanghai38@huawei.com>
Suggested-by: Joe Perches <joe@perches.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: hns: use eth_broadcast_addr() to assign broadcast address  (Xu Wang)
This patch is to use eth_broadcast_addr() to assign the broadcast address instead of memset().

Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
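A minimal, generic illustration (not the hns code) of the substitution: eth_broadcast_addr() documents intent better than an open-coded memset.

#include <linux/etherdevice.h>

static void example_fill_bcast(u8 mac[ETH_ALEN])
{
	/* Before: memset(mac, 0xff, ETH_ALEN); */
	eth_broadcast_addr(mac);
}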
2020-07-20  Merge branch 'net-dsa-Setup-dsa_netdev_ops'  (David S. Miller)
Florian Fainelli says:

====================
net: dsa: Setup dsa_netdev_ops

This patch series addresses the overloading of a DSA CPU/management interface's netdev_ops for the purpose of providing useful information from the switch side. Up until now we had duplicated the existing netdev_ops structure and added specific function pointers to return information of interest.

Here we have a more controlled way of doing this by involving the specific netdev_ops function pointers that we want to be patched, which is easier for auditing code in the future. As a byproduct we can now maintain netdev_ops pointer comparisons which would have been failing before (no known in-tree problems because of that though).

Let me know if this approach looks reasonable to you and we might do the same with our ethtool_ops overloading as well.

Changes in v2:
- use static inline int vs. static int inline (Kbuild robot)
- fixed typos in patch 4 (Andrew)
- avoid using macros (Andrew)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: dsa: Setup dsa_netdev_ops  (Florian Fainelli)
Now that we have all the infrastructure in place for calling into the dsa_ptr->netdev_ops function pointers, install them when we configure the DSA CPU/management interface and tear them down. The flow is unchanged from before, but now we preserve equality of tests when network device drivers do tests like dev->netdev_ops == &foo_ops which was not the case before since we were allocating an entirely new structure. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: Call into DSA netdevice_ops wrappers  (Florian Fainelli)
Make the core net_device code call into our ndo_do_ioctl() and ndo_get_phys_port_name() functions via the wrappers defined previously.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: dsa: Add wrappers for overloaded ndo_ops  (Florian Fainelli)
Add definitions for the dsa_netdevice_ops structure, which is a subset of the net_device_ops structure for the specific operations that we care about overlaying on top of the DSA CPU port net_device, and provide inline stubs that take care of managing whether the DSA code is reachable.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
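A rough sketch of the idea; the member set and the stub shape are assumptions for illustration, not the exact dsa.h hunk. Only the overloaded operations live in a small side structure, and an inline wrapper falls back to -EOPNOTSUPP when no DSA ops are attached or DSA is compiled out.

#include <linux/errno.h>
#include <linux/netdevice.h>

struct dsa_netdevice_ops {
	int (*ndo_do_ioctl)(struct net_device *dev, struct ifreq *ifr,
			    int cmd);
};

#if IS_ENABLED(CONFIG_NET_DSA)
static inline int dsa_ndo_do_ioctl(struct net_device *dev,
				   const struct dsa_netdevice_ops *ops,
				   struct ifreq *ifr, int cmd)
{
	if (!ops || !ops->ndo_do_ioctl)
		return -EOPNOTSUPP;

	return ops->ndo_do_ioctl(dev, ifr, cmd);
}
#else
static inline int dsa_ndo_do_ioctl(struct net_device *dev,
				   const struct dsa_netdevice_ops *ops,
				   struct ifreq *ifr, int cmd)
{
	return -EOPNOTSUPP;
}
#endif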
2020-07-20  net: Wrap ndo_do_ioctl() to prepare for DSA stacked ops  (Florian Fainelli)
In preparation for adding another layer of call into a DSA stacked ops singleton, wrap the ndo_do_ioctl() call into dev_do_ioctl(). Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-20  net: vxge-main: Remove unnecessary cast in kfree()  (Xu Wang)
Remove unnecessary casts in the argument to kfree().

Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  Merge branch 'Fully-describe-the-waveform-for-PTP-periodic-output'  (David S. Miller)
Vladimir Oltean says:

====================
Fully describe the waveform for PTP periodic output

While using the ancillary pin functionality of PTP hardware clocks to synchronize multiple DSA switches on a board, a need arose to be able to configure the duty cycle of the master of this PPS hierarchy. Also, the PPS master is not able to emit PPS starting from arbitrary absolute times, so a new flag is introduced to support such hardware without making guesses.

With these patches, struct ptp_perout_request now basically describes a general-purpose square wave.

Changes in v2:
Made sure this applies to net-next.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mscc: ocelot: add support for PTP waveform configuration  (Vladimir Oltean)
For PPS output (perout period is 1.000000000), accept the new "phase" parameter from the periodic output request structure. For both PPS and freeform output, accept the new "on" argument for specifying the duty cycle of the generated signal. Preserve the old defaults for this "on" time: 1 us for PPS, and half the period for freeform output. Also preserve the old behavior that accepted the "phase" via the "start" argument. Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  ptp: introduce a phase offset in the periodic output request  (Vladimir Oltean)
Some PHCs like the ocelot/felix switch cannot emit generic periodic output, but just PPS (pulse per second) signals, which:
- don't start from arbitrary absolute times, but are rather phase-aligned to the beginning of [the closest next] second.
- have an optional phase offset relative to that beginning of the second.

For those, it was initially established that they should reject any other absolute time for the PTP_PEROUT_REQUEST than 0.000000000 [1]. But when it actually came to writing an application [2] that makes use of this functionality, we realized that we can't really deal generically with PHCs that support an absolute start time and with PHCs that don't, without an explicit interface.

Namely, in an ideal world, PHC drivers would ensure that the "perout.start" value written to hardware will result in a functional output. This means that if the requested start time has fallen into the past of this PHC's current time, it should be automatically fast-forwarded by the driver into a close enough future time that is known to work (note: this is necessary only if the hardware doesn't do this fast-forward by itself). But we don't really know what the status is for the PHC drivers in use today, so in the general sense, user space would risk getting a non-functional periodic output if it simply asked for a start time of 0.000000000.

So let's introduce a flag for this type of reduced-functionality hardware, named PTP_PEROUT_PHASE. The start time is just "soon"; the only thing we know for sure about this signal is that its rising edge events, Rn, occur at:

Rn = perout.phase + n * perout.period

The "phase" in the periodic output structure is simply an alias to the "start" time, since both cannot logically be specified at the same time. Therefore, the binary layout of the structure is not affected.

[1]: https://patchwork.ozlabs.org/project/netdev/patch/20200320103726.32559-7-yangbo.lu@nxp.com/
[2]: https://www.mail-archive.com/linuxptp-devel@lists.sourceforge.net/msg04142.html

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  ptp: add ability to configure duty cycle for periodic output  (Vladimir Oltean)
There are external event timestampers (PHCs with support for PTP_EXTTS_REQUEST) that timestamp both event edges. When those edges are very close (such as in the case of a short pulse), there is a chance that the collected timestamp might be of the rising, or of the falling edge, we never know.

There are also PHCs capable of generating periodic output with a configurable duty cycle. This is good news, because we can space the rising and falling edge out enough in time, that the risks to overrun the 1-entry timestamp FIFO of the extts PHC are lower (example: the perout PHC can be configured for a period of 1 second, and an "on" time of 0.5 seconds, resulting in a duty cycle of 50%).

A flag is introduced for signaling that an on time is present in the perout request structure, for preserving compatibility. Logically speaking, the duty cycle cannot exceed 100% and the PTP core checks for this.

PHC drivers that don't support this flag emit a periodic output of an unspecified duty cycle, same as before.

The duty cycle is encoded as an "on" time, similar to the "start" and "period" times, and reuses the reserved space while preserving overall binary layout.

Pahole reported before:

struct ptp_perout_request {
        struct ptp_clock_time      start;        /*     0    16 */
        struct ptp_clock_time      period;       /*    16    16 */
        unsigned int               index;        /*    32     4 */
        unsigned int               flags;        /*    36     4 */
        unsigned int               rsv[4];       /*    40    16 */

        /* size: 56, cachelines: 1, members: 5 */
        /* last cacheline: 56 bytes */
};

And now:

struct ptp_perout_request {
        struct ptp_clock_time      start;        /*     0    16 */
        struct ptp_clock_time      period;       /*    16    16 */
        unsigned int               index;        /*    32     4 */
        unsigned int               flags;        /*    36     4 */
        union {
                struct ptp_clock_time on;        /*    40    16 */
                unsigned int       rsv[4];       /*    40    16 */
        };                                       /*    40    16 */

        /* size: 56, cachelines: 1, members: 5 */
        /* last cacheline: 56 bytes */
};

Signed-off-by: Vladimir Oltean <olteanv@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
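A hedged user-space sketch of the resulting UAPI: request a 1 s periodic output with a 100 ms phase offset and a 50% duty cycle. Only PTP_PEROUT_PHASE and the "phase"/"on" fields are named in the commit texts above; the PTP_PEROUT_DUTY_CYCLE flag name, the PTP_PEROUT_REQUEST2 ioctl and the /dev/ptp0 path are assumptions of this sketch, and error handling is omitted.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

int request_pps(void)
{
	struct ptp_perout_request req;
	int fd = open("/dev/ptp0", O_RDWR);	/* PHC character device */

	memset(&req, 0, sizeof(req));
	req.index = 0;				/* periodic output pin/channel 0 */
	req.flags = PTP_PEROUT_PHASE | PTP_PEROUT_DUTY_CYCLE;
	req.period.sec = 1;			/* rising edges: Rn = phase + n * period */
	req.period.nsec = 0;
	req.phase.sec = 0;			/* 100 ms after each full second */
	req.phase.nsec = 100000000;
	req.on.sec = 0;				/* signal high for half the period */
	req.on.nsec = 500000000;

	return ioctl(fd, PTP_PEROUT_REQUEST2, &req);
}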
2020-07-19  icmp: support rfc 4884  (Willem de Bruijn)
Add setsockopt SOL_IP/IP_RECVERR_RFC4884 to return the offset to an extension struct if present.

ICMP messages may include an extension structure after the original datagram. RFC 4884 standardized this behavior. It stores the offset in words to the extension header in u8 icmphdr.un.reserved[1]. The field is valid only for ICMP types destination unreachable, time exceeded and parameter problem, if the length is at least 128 bytes and the entire packet does not exceed 576 bytes.

Return the offset to the start of the extension struct when reading an ICMP error from the error queue, if it matches the above constraints.

Do not return the raw u8 field. Return the offset from the start of the user buffer, in bytes. The kernel does not return the network and transport headers, so subtract those.

Also validate the headers. Return the offset regardless of validation, as an invalid extension must still not be misinterpreted as part of the original datagram. Note that !invalid does not imply valid. If the extension version does not match, no validation can take place, for instance.

For backward compatibility, make this optional, set by setsockopt SOL_IP/IP_RECVERR_RFC4884. For an API example and feature test, see github.com/wdebruij/kerneltools/blob/master/tests/recv_icmp_v2.c

For forward compatibility, reserve only setsockopt value 1, leaving other bits for additional icmp extensions.

Changes v1->v2:
- convert word offset to byte offset from start of user buffer
- return in ee_data as u8 may be insufficient
- define extension struct and object header structs
- return len only if constraints met
- if returning len, also validate

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
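A hedged user-space sketch of the option as described above: enable RFC 4884 offset reporting, then read ICMP errors from the error queue. Per this patch's own change log the byte offset is reported via ee_data; that field choice, and the availability of IP_RECVERR_RFC4884 in the installed UAPI headers, are assumptions of this sketch.

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

static void enable_rfc4884(int fd)
{
	int one = 1;

	/* IP_RECVERR_RFC4884 comes from the updated <linux/in.h> UAPI. */
	setsockopt(fd, SOL_IP, IP_RECVERR, &one, sizeof(one));
	setsockopt(fd, SOL_IP, IP_RECVERR_RFC4884, &one, sizeof(one));
}

static void print_ext_offset(int fd)
{
	char data[1024], ctrl[512];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		return;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level != SOL_IP || cm->cmsg_type != IP_RECVERR)
			continue;

		struct sock_extended_err *ee = (void *)CMSG_DATA(cm);

		/* Non-zero means an RFC 4884 extension struct was found. */
		if (ee->ee_data)
			printf("extension at byte offset %u\n", ee->ee_data);
	}
}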
2020-07-19  Merge branch 'rework-mvneta-napi_poll-loop-for-XDP-multi-buffers'  (David S. Miller)
Lorenzo Bianconi says:

====================
rework mvneta napi_poll loop for XDP multi-buffers

Rework the mvneta_rx_swbm routine in order to process all rx descriptors before building the skb or running the xdp program attached to the interface.

Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get the skb_shared_info pointer from xdp_buff or xdp_frame.

This is a preliminary series to enable multi-buffers and jumbo frames for XDP according to [1]

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org

Changes since v1:
- rely on skb_frag_* utility routines to access page/offset/len of the xdp multi-buffer
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mvneta: move rxq->left_size on the stack  (Lorenzo Bianconi)
Allocate rxq->left_size on the mvneta_rx_swbm stack since it is used just in the SW BM napi_poll.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mvneta: get rid of skb in mvneta_rx_queue  (Lorenzo Bianconi)
Remove the skb pointer in the mvneta_rx_queue data structure since it is no longer used.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mvneta: drop all fragments in XDP_DROP  (Lorenzo Bianconi)
Release all consumed pages if the eBPF program returns XDP_DROP for XDP multi-buffers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mvneta: move mvneta_run_xdp after descriptors processing  (Lorenzo Bianconi)
Move the mvneta_run_xdp routine after all descriptor processing. This is a preliminary patch to enable multi-buffer and jumbo frame support for XDP.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  net: mvneta: move skb build after descriptors processing  (Lorenzo Bianconi)
Move the skb build after all descriptor processing. This is a preliminary patch to enable multi-buffer and jumbo frame support for XDP. Introduce the mvneta_xdp_put_buff routine to release all pages used by an XDP multi-buffer.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  xdp: introduce xdp_get_shared_info_from_{buff, frame} utility routines  (Lorenzo Bianconi)
Introduce the xdp_get_shared_info_from_{buff,frame} utility routines to get the skb_shared_info pointer from an xdp buffer/frame pointer. xdp_get_shared_info_from_{buff,frame} will be used to implement XDP multi-buffer support.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
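The helpers most likely follow this shape (a sketch based on the commit description, not a verbatim copy of the patch): the skb_shared_info area sits at the hard end of the buffer/frame data area.

#include <linux/skbuff.h>
#include <net/xdp.h>

static inline struct skb_shared_info *
example_shared_info_from_buff(struct xdp_buff *xdp)
{
	/* xdp_data_hard_end() already accounts for the tailroom reserved
	 * for skb_shared_info via SKB_DATA_ALIGN().
	 */
	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
}

static inline struct skb_shared_info *
example_shared_info_from_frame(struct xdp_frame *frame)
{
	/* An xdp_frame is stored at data_hard_start, followed by the
	 * headroom and then the packet data.
	 */
	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);

	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
}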
2020-07-19  Merge branch 'do-a-single-memdup_user-in-sctp_setsockopt-v2'  (David S. Miller)
Christoph Hellwig says:

====================
do a single memdup_user in sctp_setsockopt v2

Here is a resend of my series to lift the copy_from_user out of the individual sctp sockopt handlers into the main sctp_setsockopt routine.

Changes since v1:
- fixes a few sizeof calls.
- use memzero_explicit in sctp_setsockopt_auth_key instead of special casing it for a kzfree in the caller
- remove some minor cleanups from sctp_setsockopt_autoclose to keep it closer to the existing version
- add another little only vaguely related cleanup patch
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
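A hedged sketch of the pattern this series implements, heavily simplified and with made-up option names: the top-level setsockopt routine copies the option value from user space once with memdup_user(), and each per-option handler then works on the kernel pointer.

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <net/sock.h>

#define EXAMPLE_SOCKOPT_FLAG	1	/* made-up option number */

static int example_setsockopt_flag(struct sock *sk, int *val,
				   unsigned int optlen)
{
	if (optlen < sizeof(*val))
		return -EINVAL;

	/* Previously each handler did copy_from_user() here itself. */
	/* ... apply *val to the socket/association state ... */
	return 0;
}

static int example_setsockopt(struct sock *sk, int optname,
			      char __user *optval, unsigned int optlen)
{
	void *kopt = NULL;
	int ret;

	if (optlen > 0) {
		kopt = memdup_user(optval, optlen);
		if (IS_ERR(kopt))
			return PTR_ERR(kopt);
	}

	switch (optname) {
	case EXAMPLE_SOCKOPT_FLAG:
		ret = example_setsockopt_flag(sk, kopt, optlen);
		break;
	default:
		ret = -ENOPROTOOPT;
		break;
	}

	kfree(kopt);
	return ret;
}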
2020-07-19  sctp: remove the out_nounlock label in sctp_setsockopt  (Christoph Hellwig)
This is just used once, and a direct return for the redirect to the AF case is much easier to follow than jumping to the end of a very long function. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_pf_expose  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_ecn_supported  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_auth_supported  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_event  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_reuse_port  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_interleaving_supported  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-07-19  sctp: pass a kernel pointer to sctp_setsockopt_scheduler_value  (Christoph Hellwig)
Use the kernel pointer that sctp_setsockopt has available instead of directly handling the user pointer. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: David S. Miller <davem@davemloft.net>