path: root/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
2021-04-16  iommu: Streamline registration interface  (Robin Murphy)
Rather than have separate opaque setter functions that are easy to overlook and lead to repetitive boilerplate in drivers, let's pass the relevant initialisation parameters directly to iommu_device_register(). Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/ab001b87c533b6f4db71eb90db6f888953986c36.1617285386.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
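A minimal sketch of what the consolidated call looks like from a driver's probe path (SMMUv3 names shown only for illustration):

	/* Pass the ops and backing device at registration time, rather
	 * than via the separate opaque setter calls mentioned above. */
	ret = iommu_device_register(&smmu->iommu, &arm_smmu_ops, smmu->dev);
	if (ret) {
		dev_err(smmu->dev, "Failed to register iommu\n");
		return ret;
	}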
2021-04-16  iommu: Statically set module owner  (Robin Murphy)
It happens that the 3 drivers which first supported being modular are also ones which play games with their pgsize_bitmap, so have non-const iommu_ops where dynamically setting the owner manages to work out OK. However, it's less than ideal to force that upon all drivers which want to be modular - like the new sprd-iommu driver which now has a potential bug in that regard - so let's just statically set the module owner and let ops remain const wherever possible. Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/31423b99ff609c3d4b291c701a7a7a810d9ce8dc.1617285386.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
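A sketch of the static assignment this describes, as it would appear in a driver's ops table (abridged):

	static struct iommu_ops arm_smmu_ops = {
		.capable	= arm_smmu_capable,
		.domain_alloc	= arm_smmu_domain_alloc,
		/* ... */
		.owner		= THIS_MODULE,	/* set statically, no runtime setter */
	};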
2021-04-16  Merge branches 'iommu/fixes', 'arm/mediatek', 'arm/smmu', 'arm/exynos', 'unisoc', 'x86/vt-d', 'x86/amd' and 'core' into next  (Joerg Roedel)
2021-04-07  iommu/arm-smmu-v3: Remove the unused fields for PREFETCH_CONFIG command  (Zenghui Yu)
Per the SMMUv3 spec, there are no Size and Addr fields in the PREFETCH_CONFIG command, and they're not used by the driver. Remove them. We can add them back if we're going to use PREFETCH_ADDR in the future. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20210407084448.1838-1-yuzenghui@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-04-07  iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE  (Robin Murphy)
Instead, make the global iommu_dma_strict parameter in iommu.c canonical by exporting helpers to get and set it, and use those directly in the drivers. This makes sure that the iommu.strict parameter also works for the AMD and Intel IOMMU drivers on x86. As those default to lazy flushing, a new IOMMU_CMD_LINE_STRICT is used to turn the value into a tristate to represent the default if not overridden by an explicit parameter. [ported on top of the other iommu_attr changes and added a few small missing bits] Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20210401155256.298656-19-hch@lst.de Signed-off-by: Joerg Roedel <jroedel@suse.de>
2021-04-07  iommu: remove DOMAIN_ATTR_NESTING  (Christoph Hellwig)
Use an explicit enable_nesting method instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Li Yang <leoyang.li@nxp.com> Link: https://lore.kernel.org/r/20210401155256.298656-17-hch@lst.de Signed-off-by: Joerg Roedel <jroedel@suse.de>
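A sketch of how a driver might implement the new callback for SMMUv3 (abridged; the locking and field names follow the existing driver but are treated as assumptions here):

	static int arm_smmu_enable_nesting(struct iommu_domain *domain)
	{
		struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
		int ret = 0;

		mutex_lock(&smmu_domain->init_mutex);
		if (smmu_domain->smmu)
			ret = -EPERM;	/* too late: domain already finalised */
		else
			smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
		mutex_unlock(&smmu_domain->init_mutex);

		return ret;
	}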
2021-04-07  iommu/arm-smmu-v3: Maintain a SID->device structure  (Jean-Philippe Brucker)
When handling faults from the event or PRI queue, we need to find the struct device associated with a SID. Add a rb_tree to keep track of SIDs. Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210401154718.307519-8-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
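An illustrative sketch of the lookup such an rb_tree enables (struct and helper names are assumptions, not necessarily the driver's):

	struct arm_smmu_stream {
		u32			id;
		struct arm_smmu_master	*master;
		struct rb_node		node;
	};

	static struct arm_smmu_master *
	arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
	{
		struct rb_node *node = smmu->streams.rb_node;
		struct arm_smmu_stream *stream;

		while (node) {
			stream = rb_entry(node, struct arm_smmu_stream, node);
			if (stream->id < sid)
				node = node->rb_right;
			else if (stream->id > sid)
				node = node->rb_left;
			else
				return stream->master;
		}
		return NULL;
	}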
2021-04-07  iommu/arm-smmu-v3: Use device properties for pasid-num-bits  (Jean-Philippe Brucker)
The pasid-num-bits property shouldn't need a dedicated fwspec field, it's a job for device properties. Add properties for IORT, and access the number of PASID bits using device_property_read_u32(). Suggested-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Acked-by: Hanjun Guo <guohanjun@huawei.com> Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
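A sketch of the property access described above (the surrounding field names are assumptions):

	u32 ssid_bits = 0;

	/* The property is optional; ssid_bits stays 0 if it is absent. */
	device_property_read_u32(dev, "pasid-num-bits", &ssid_bits);
	master->ssid_bits = min_t(u32, ssid_bits, smmu->ssid_bits);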
2021-03-30  iommu/arm-smmu-v3: Add a check to avoid invalid iotlb sync  (Xiang Chen)
An invalid TLB sync may be issued for SMMUv3 if the iotlb_gather is not valid (iotlb_gather->pgsize == 0), so add a check to avoid it. Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com> Link: https://lore.kernel.org/r/1617109106-121844-1-git-send-email-chenxiang66@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
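A sketch of the guard this adds in the driver's iotlb_sync callback (abridged; the invalidation helper name is an assumption):

	static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
					struct iommu_iotlb_gather *gather)
	{
		struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);

		/* Nothing was gathered: skip the pointless TLB sync. */
		if (!gather->pgsize)
			return;

		arm_smmu_tlb_inv_range_domain(gather->start,
					      gather->end - gather->start + 1,
					      gather->pgsize, true, smmu_domain);
	}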
2021-02-01  Merge branch 'for-joerg/mtk' into for-joerg/arm-smmu/updates  (Will Deacon)
Merge in Mediatek support from Yong Wu which introduces significant changes to the TLB invalidation and Arm short-descriptor code in the io-pgtable layer.

* for-joerg/mtk: (40 commits)
  MAINTAINERS: Add entry for MediaTek IOMMU
  iommu/mediatek: Add mt8192 support
  iommu/mediatek: Remove unnecessary check in attach_device
  iommu/mediatek: Support master use iova over 32bit
  iommu/mediatek: Add iova reserved function
  iommu/mediatek: Support for multi domains
  iommu/mediatek: Add get_domain_id from dev->dma_range_map
  iommu/mediatek: Add iova_region structure
  iommu/mediatek: Move geometry.aperture updating into domain_finalise
  iommu/mediatek: Move domain_finalise into attach_device
  iommu/mediatek: Adjust the structure
  iommu/mediatek: Support report iova 34bit translation fault in ISR
  iommu/mediatek: Support up to 34bit iova in tlb flush
  iommu/mediatek: Add power-domain operation
  iommu/mediatek: Add pm runtime callback
  iommu/mediatek: Add device link for smi-common and m4u
  iommu/mediatek: Add error handle for mtk_iommu_probe
  iommu/mediatek: Move hw_init into attach_device
  iommu/mediatek: Update oas for v7s
  iommu/mediatek: Add a flag for iova 34bits case
  ...
2021-01-27  iommu: Switch gather->end to the inclusive end  (Yong Wu)
Currently gather->end is "unsigned long", which may overflow on 32-bit architectures in the corner case 0xfff00000 + 0x100000 (iova + size). Although it doesn't affect the size (end - start), it affects the check "gather->end < end". This patch changes "end" to the real end address (end = start + size - 1). Correspondingly, update the length to "end - start + 1". Fixes: a7d20dc19d9e ("iommu: Introduce struct iommu_iotlb_gather for batching TLB flushes") Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210107122909.16317-5-yong.wu@mediatek.com Signed-off-by: Will Deacon <will@kernel.org>
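A small sketch of the bookkeeping after this change (illustrative only):

	/* Record the last byte covered rather than the exclusive end, so the
	 * value cannot wrap on 32-bit when iova + size reaches 1ULL << 32. */
	unsigned long start = iova, end = start + size - 1;

	if (gather->start > start)
		gather->start = start;
	if (gather->end < end)
		gather->end = end;

	/* The flush side recovers the length as gather->end - gather->start + 1. */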
2021-01-22  iommu/arm-smmu-v3: Add support for VHE  (Jean-Philippe Brucker)
The ARMv8.1 extensions added Virtualization Host Extensions (VHE), which allow running a host kernel at EL2. When using normal DMA, device and CPU address spaces are dissociated and do not need to implement the same capabilities, so VHE hasn't been used in the SMMU until now. With shared address spaces, however, ASIDs are shared between MMU and SMMU, and broadcast TLB invalidations issued by a CPU are taken into account by the SMMU. TLB entries on both sides need to have an identical exception level in order to be cleared with a single invalidation. When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal DMA mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but shouldn't be otherwise affected by this change. Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20210122151054.2833521-4-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22  iommu/arm-smmu-v3: Make BTM optional for SVA  (Jean-Philippe Brucker)
When BTM isn't supported by the SMMU, send invalidations on the command queue. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20210122151054.2833521-3-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22  iommu/arm-smmu-v3: Split arm_smmu_tlb_inv_range()  (Jean-Philippe Brucker)
Extract some of the cmd initialization and the ATC invalidation from arm_smmu_tlb_inv_range(), to allow an MMU notifier to invalidate a VA range by ASID. Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20210122151054.2833521-2-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
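A sketch of the resulting split, shown as prototypes (names follow the driver but should be read as assumptions):

	/* Common helper: builds and issues the range TLBI commands. */
	static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
					     unsigned long iova, size_t size,
					     size_t granule,
					     struct arm_smmu_domain *smmu_domain);

	/* Classic map/unmap path: invalidate by the domain's own ASID/VMID. */
	static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
						  size_t granule, bool leaf,
						  struct arm_smmu_domain *smmu_domain);

	/* SVA path: lets an MMU notifier invalidate a VA range for a given ASID. */
	static void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size,
						int asid, size_t granule, bool leaf,
						struct arm_smmu_domain *smmu_domain);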
2021-01-22  iommu/arm-smmu-v3: Use DEFINE_RES_MEM() to simplify code  (Zhen Lei)
No functional change. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/20210122131448.1167-1-thunder.leizhen@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2021-01-22  iommu/arm-smmu-v3: Remove the page 1 fixup  (Robin Murphy)
Since we now keep track of page 1 via a separate pointer that already encapsulates aliasing to page 0 as necessary, we can remove the clunky fixup routine and simply use the relevant bases directly. The current architecture spec (IHI0070D.a) defines SMMU_{EVENTQ,PRIQ}_{PROD,CONS} as offsets relative to page 1, so the cleanup represents a little bit of convergence as well as just lines of code saved. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/08d9bda570bb5681f11a2f250a31be9ef763b8c5.1611238182.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-12-08  iommu/io-pgtable: Remove tlb_flush_leaf  (Robin Murphy)
The only user of tlb_flush_leaf is a particularly hairy corner of the Arm short-descriptor code, which wants a synchronous invalidation to minimise the races inherent in trying to split a large page mapping. This is already far enough into "here be dragons" territory that no sensible caller should ever hit it, and thus it really doesn't need optimising. Although using tlb_flush_walk there may technically be more heavyweight than needed, it does the job and saves everyone else having to carry around useless baggage. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Steven Price <steven.price@arm.com> Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-12-08  Merge branch 'for-next/iommu/svm' into for-next/iommu/core  (Will Deacon)
More steps along the way to Shared Virtual {Addressing, Memory} support for Arm's SMMUv3, including the addition of a helper library that can be shared amongst other IOMMU implementations wishing to support this feature.

* for-next/iommu/svm:
  iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops
  iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()
  iommu/sva: Add PASID helpers
  iommu/ioasid: Add ioasid references
2020-11-23  iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops  (Jean-Philippe Brucker)
The invalidate_range() notifier is called for any change to the address space. Perform the required ATC invalidations. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20201106155048.997886-5-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
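A sketch of the notifier callback this describes (the container_of wrapper and invalidation helper are assumptions):

	static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
						 struct mm_struct *mm,
						 unsigned long start,
						 unsigned long end)
	{
		struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);

		/* The CPU page tables changed; drop any cached ATC entries. */
		arm_smmu_atc_inv_domain(smmu_mn->domain, mm->pasid,
					start, end - start);
	}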
2020-11-23  iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()  (Jean-Philippe Brucker)
The sva_bind() function allows devices to access process address spaces using a PASID (aka SSID).

(1) bind() allocates or gets an existing MMU notifier tied to the (domain, mm) pair. Each mm gets one PASID.

(2) Any change to the address space calls invalidate_range() which sends ATC invalidations (in a subsequent patch).

(3) When the process address space dies, the release() notifier disables the CD to allow reclaiming the page tables. Since release() has to be light we do not instruct device drivers to stop DMA here, we just ignore incoming page faults from this point onwards. To avoid any event 0x0a print (C_BAD_CD) we disable translation without clearing CD.V. PCIe Translation Requests and Page Requests are silently denied. Don't clear the R bit because the S bit can't be cleared when STALL_MODEL==0b10 (forced), and clearing R without clearing S is useless. Faulting transactions will stall and will be aborted by the IOPF handler.

(4) After stopping DMA, the device driver releases the bond by calling unbind(). We release the MMU notifier, free the PASID and the bond.

Three structures keep track of bonds:
* arm_smmu_bond: one per {device, mm} pair, the handle returned to the device driver for a bind() request.
* arm_smmu_mmu_notifier: one per {domain, mm} pair, deals with ATS/TLB invalidations and clearing the context descriptor on mm exit.
* arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20201106155048.997886-4-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
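An abridged sketch of the first two bookkeeping structures listed above (field layout is an assumption; arm_smmu_ctx_desc is the existing per-mm descriptor holding the pinned ASID and pgd):

	struct arm_smmu_bond {			/* one per {device, mm} bind() */
		struct iommu_sva		sva;
		struct mm_struct		*mm;
		struct arm_smmu_mmu_notifier	*smmu_mn;
		struct list_head		list;
		refcount_t			refs;
	};

	struct arm_smmu_mmu_notifier {		/* one per {domain, mm} pair */
		struct mmu_notifier		mn;
		struct arm_smmu_ctx_desc	*cd;
		bool				cleared;
		refcount_t			refs;
		struct arm_smmu_domain		*domain;
		struct list_head		list;
	};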
2020-11-10  iommu/arm-smmu-v3: Assign boolean values to a bool variable  (Kaixu Xia)
Fix the following coccinelle warnings: ./drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:36:12-26: WARNING: Assignment of 0/1 to bool variable Signed-off-by: Kaixu Xia <kaixuxia@tencent.com> Link: https://lore.kernel.org/r/1604744439-6846-1-git-send-email-kaixuxia@tencent.com Signed-off-by: Will Deacon <will@kernel.org>
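The warning corresponds to a change of this shape (illustrative diff):

	-static bool disable_bypass = 1;
	+static bool disable_bypass = true;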
2020-09-28  iommu/arm-smmu-v3: Add SVA device feature  (Jean-Philippe Brucker)
Implement the IOMMU device feature callbacks to support the SVA feature. At the moment dev_has_feat() returns false since I/O Page Faults and BTM aren't yet implemented. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-12-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Check for SVA features  (Jean-Philippe Brucker)
Aggregate all sanity-checks for sharing CPU page tables with the SMMU under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to check FEAT_ATS and FEAT_PRI. For platform SVA, they will have to check FEAT_STALLS. Introduce ARM_SMMU_FEAT_BTM (Broadcast TLB Maintenance), but don't enable it at the moment. Since the entire VMID space is shared with the CPU, enabling DVM (by clearing SMMU_CR2.PTM) could result in over-invalidation and affect performance of stage-2 mappings. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20200918101852.582559-11-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Seize private ASID  (Jean-Philippe Brucker)
The SMMU has a single ASID space, the union of shared and private ASID sets. This means that the SMMU driver competes with the arch allocator for ASIDs. Shared ASIDs are those of Linux processes, allocated by the arch, and participate in broadcast TLB maintenance. Private ASIDs are allocated by the SMMU driver and used for "classic" map/unmap DMA. They require command-queue TLB invalidations. When we pin down an mm_context and get an ASID that is already in use by the SMMU, it belongs to a private context. We used to simply abort the bind, but this is unfair to users that would be unable to bind a few seemingly random processes. Try to allocate a new private ASID for the context, and make the old ASID shared. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-10-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Share process page tables  (Jean-Philippe Brucker)
With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR, MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split into two sets, shared and private. Shared ASIDs correspond to those obtained from the arch ASID allocator, and private ASIDs are used for "classic" map/unmap DMA. A possible conflict happens when trying to use a shared ASID that has already been allocated for private use by the SMMU driver. This will be addressed in a later patch by replacing the private ASID. At the moment we return -EBUSY. Each mm_struct shared with the SMMU will have a single context descriptor. Add a refcount to keep track of this. It will be protected by the global SVA lock. Introduce a new arm-smmu-v3-sva.c file and the CONFIG_ARM_SMMU_V3_SVA option to let users opt in to SVA support. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-9-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Move definitions to a header  (Jean-Philippe Brucker)
Allow sharing structure definitions with the upcoming SVA support for Arm SMMUv3, by moving them to a separate header. We could surgically extract only what is needed but keeping all definitions in one place looks nicer. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Link: https://lore.kernel.org/r/20200918101852.582559-8-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-28  iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer  (Zhou Wang)
Reading the 'prod' MMIO register in order to determine whether or not there is valid data beyond 'cons' for a given queue does not provide sufficient dependency ordering, as the resulting access is address dependent only on 'cons' and can therefore be speculated ahead of time, potentially allowing stale data to be read by the CPU. Use readl() instead of readl_relaxed() when updating the shadow copy of the 'prod' pointer, so that all speculated memory reads from the corresponding queue can occur only from valid slots. Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com> Link: https://lore.kernel.org/r/1601281922-117296-1-git-send-email-wangzhou1@hisilicon.com [will: Use readl() instead of explicit barrier. Update 'cons' side to match.] Signed-off-by: Will Deacon <will@kernel.org>
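A sketch of the change on the 'prod' side (abridged; the queue field names are assumptions):

	static void queue_sync_prod_in(struct arm_smmu_queue *q)
	{
		u32 prod;

		/*
		 * Plain readl() rather than readl_relaxed(), so that reads of
		 * the queue slots cannot be speculated ahead of the shadow
		 * 'prod' update and observe stale entries.
		 */
		prod = readl(q->prod_reg);
		q->llq.prod = prod;
	}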
2020-09-21  iommu/arm-smmu-v3: Fix endianness annotations  (Jean-Philippe Brucker)
When building with C=1, sparse reports some issues regarding endianness annotations:

  arm-smmu-v3.c:221:26: warning: cast to restricted __le64
  arm-smmu-v3.c:221:24: warning: incorrect type in assignment (different base types)
  arm-smmu-v3.c:221:24:    expected restricted __le64 [usertype]
  arm-smmu-v3.c:221:24:    got unsigned long long [usertype]
  arm-smmu-v3.c:229:20: warning: incorrect type in argument 1 (different base types)
  arm-smmu-v3.c:229:20:    expected restricted __le64 [usertype] *[assigned] dst
  arm-smmu-v3.c:229:20:    got unsigned long long [usertype] *ent
  arm-smmu-v3.c:229:25: warning: incorrect type in argument 2 (different base types)
  arm-smmu-v3.c:229:25:    expected unsigned long long [usertype] *[assigned] src
  arm-smmu-v3.c:229:25:    got restricted __le64 [usertype] *
  arm-smmu-v3.c:396:20: warning: incorrect type in argument 1 (different base types)
  arm-smmu-v3.c:396:20:    expected restricted __le64 [usertype] *[assigned] dst
  arm-smmu-v3.c:396:20:    got unsigned long long *
  arm-smmu-v3.c:396:25: warning: incorrect type in argument 2 (different base types)
  arm-smmu-v3.c:396:25:    expected unsigned long long [usertype] *[assigned] src
  arm-smmu-v3.c:396:25:    got restricted __le64 [usertype] *
  arm-smmu-v3.c:1349:32: warning: invalid assignment: |=
  arm-smmu-v3.c:1349:32:    left side has type restricted __le64
  arm-smmu-v3.c:1349:32:    right side has type unsigned long
  arm-smmu-v3.c:1396:53: warning: incorrect type in argument 3 (different base types)
  arm-smmu-v3.c:1396:53:    expected restricted __le64 [usertype] *dst
  arm-smmu-v3.c:1396:53:    got unsigned long long [usertype] *strtab
  arm-smmu-v3.c:1424:39: warning: incorrect type in argument 1 (different base types)
  arm-smmu-v3.c:1424:39:    expected unsigned long long [usertype] *[assigned] strtab
  arm-smmu-v3.c:1424:39:    got restricted __le64 [usertype] *l2ptr

While harmless, they are incorrect and could hide actual errors during development. Fix them. Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200918141856.629722-1-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
2020-09-07  iommu/arm-smmu-v3: permit users to disable msi polling  (Barry Song)
Polling by MSI isn't necessarily faster than polling by SEV. Tests on hi1620 show that hns3 100G NIC network throughput can improve from 25G to 27G if we disable MSI polling while running 16 netperf threads sending 32KB UDP packets. TX throughput can improve from 7G to 7.7G for a single thread. The reason for the throughput improvement is that the latency to poll for completion of CMD_SYNC becomes smaller. After sending a CMD_SYNC in an empty cmd queue, typically we need to wait for 280ns using MSI polling, but only around 190ns after disabling MSI polling. This patch provides a command line option so that users can decide whether to use MSI polling based on their own tests. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200827092957.22500-4-song.bao.hua@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
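A sketch of the command-line knob this introduces (the parameter name and description are assumptions here):

	static bool disable_msipolling;
	module_param(disable_msipolling, bool, 0444);
	MODULE_PARM_DESC(disable_msipolling,
		"Disable MSI-based polling for CMD_SYNC completion.");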
2020-09-07  iommu/arm-smmu-v3: replace module_param_named by module_param for disable_bypass  (Barry Song)
Just use module_param() - going out of the way to specify a "different" name that's identical to the variable name is silly. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200827092957.22500-3-song.bao.hua@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
2020-09-07  iommu/arm-smmu-v3: replace symbolic permissions by octal permissions for module parameter  (Barry Song)
This fixes the following checkpatch warning:

  WARNING: Symbolic permissions 'S_IRUGO' are not preferred. Consider using octal permissions '0444'.
  417: FILE: drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:417:
  module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20200827092957.22500-2-song.bao.hua@hisilicon.com Signed-off-by: Will Deacon <will@kernel.org>
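The resulting change has this shape (illustrative diff):

	-module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
	+module_param_named(disable_bypass, disable_bypass, bool, 0444);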
2020-09-07  iommu/arm-smmu-v3: Fix l1 stream table size in the error message  (Zenghui Yu)
The actual size of the level-1 stream table is l1size. This looks like an oversight in commit d2e88e7c081ef ("iommu/arm-smmu: Fix LOG2SIZE setting for 2-level stream tables"), which forgot to update the @size in the error message as well. As a memory allocation failure is already bad enough, nothing worse would happen. But let's be careful. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20200826141758.341-1-yuzenghui@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
2020-08-23  treewide: Use fallthrough pseudo-keyword  (Gustavo A. R. Silva)
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
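A generic illustration of the replacement (not a specific hunk from this driver):

	switch (reg) {
	case VALUE_A:
		handle_a();
		fallthrough;	/* previously a "fall through" comment */
	case VALUE_B:
		handle_b();
		break;
	}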
2020-07-29  Merge tag 'arm-smmu-updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into next  (Joerg Roedel)
More Arm SMMU updates for 5.9:
- Move Arm SMMU driver files into their own subdirectory
2020-07-27  iommu/arm-smmu: Move Arm SMMU drivers into their own subdirectory  (Will Deacon)
The Arm SMMU drivers are getting fat on vendor value-add, so move them to their own subdirectory out of the way of the other IOMMU drivers. Suggested-by: Joerg Roedel <joro@8bytes.org> Signed-off-by: Will Deacon <will@kernel.org>