path: root/drivers/iommu
2019-09-13  iommu: pass cell_count = -1 to of_for_each_phandle with cells_name  (Uwe Kleine-König)
Currently of_for_each_phandle ignores the cell_count parameter when a cells_name is given. I intend to change that and let the iterator fall back to a non-negative cell_count if the cells_name property is missing in the referenced node. To keep the iteration behaviour of existing of_for_each_phandle users unchanged, fix them to pass cell_count = -1 when a cells_name is also given, which yields the expected behaviour both with and without my change. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Acked-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Rob Herring <robh@kernel.org>
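For illustration, a sketch of the calling convention this change standardizes on; the property names ("foos", "#foo-cells") and the callback are made up for the example:

    struct of_phandle_iterator it;
    int err;

    /* cells_name is given, so pass cell_count = -1 and let the
     * "#foo-cells" property of each referenced node decide */
    of_for_each_phandle(&it, err, np, "foos", "#foo-cells", -1) {
            /* it.node is the currently referenced device_node */
            handle_phandle_target(it.node);
    }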
2019-09-11  Merge branches 'arm/omap', 'arm/exynos', 'arm/smmu', 'arm/mediatek', 'arm/qcom', 'arm/renesas', 'x86/amd', 'x86/vt-d' and 'core' into next  (Joerg Roedel)
2019-09-11  iommu/vt-d: Declare Broadwell igfx dmar support snafu  (Chris Wilson)
Despite the widespread and complete failure of Broadwell integrated graphics when DMAR is enabled, known over the years, we have never been able to root cause the issue. Instead, we let the failure undermine our confidence in the iommu system itself when we should be pushing for it to be always enabled. Quirk away Broadwell and remove the rotten apple. References: https://bugs.freedesktop.org/show_bug.cgi?id=89360 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: Martin Peres <martin.peres@linux.intel.com> Cc: Joerg Roedel <joro@8bytes.org> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11  iommu/vt-d: Add Scalable Mode fault information  (Kyung Min Park)
Intel VT-d specification revision 3 added support for Scalable Mode Translation for DMA remapping. Add the Scalable Mode fault reasons to show detailed fault reasons when the translation fault happens. Link: https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Signed-off-by: Kyung Min Park <kyung.min.park@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11  iommu/vt-d: Use bounce buffer for untrusted devices  (Lu Baolu)
The Intel VT-d hardware uses paging for DMA remapping. The minimum mapped window is a page size. The device drivers may map buffers not filling the whole IOMMU window. This allows the device to access possibly unrelated memory, and a malicious device could exploit this to perform DMA attacks. To address this, the Intel IOMMU driver will use bounce pages for those buffers which don't fill whole IOMMU pages. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Xu Pengfei <pengfei.xu@intel.com> Tested-by: Mika Westerberg <mika.westerberg@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11  iommu/vt-d: Add trace events for device dma map/unmap  (Lu Baolu)
This adds trace support for the Intel IOMMU driver. It also declares some events which could be used to trace the events when an IOVA is being mapped or unmapped in a domain. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11  iommu/vt-d: Don't switch off swiotlb if bounce page is used  (Lu Baolu)
The bounce page implementation depends on swiotlb. Hence, don't switch off swiotlb if the system has untrusted devices or could potentially be hot-added with any untrusted devices. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-11  iommu/vt-d: Check whether device requires bounce buffer  (Lu Baolu)
This adds a helper to check whether a device needs to use bounce buffer. It also provides a boot time option to disable the bounce buffer. Users can use this to prevent the iommu driver from using the bounce buffer for performance gain. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Xu Pengfei <pengfei.xu@intel.com> Tested-by: Mika Westerberg <mika.westerberg@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
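As a rough sketch (the helper name and the opt-out flag are assumptions, not lifted from the patch), such a check boils down to the PCI "untrusted" flag plus the boot-time override:

    static bool device_needs_bounce(struct device *dev)
    {
            if (no_bounce)  /* hypothetical flag set by the boot option */
                    return false;

            return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
    }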
2019-09-06  iommu/omap: Mark pm functions __maybe_unused  (Arnd Bergmann)
The runtime_pm functions are unused when CONFIG_PM is disabled:
  drivers/iommu/omap-iommu.c:1022:12: error: unused function 'omap_iommu_runtime_suspend' [-Werror,-Wunused-function]
  static int omap_iommu_runtime_suspend(struct device *dev)
  drivers/iommu/omap-iommu.c:1064:12: error: unused function 'omap_iommu_runtime_resume' [-Werror,-Wunused-function]
  static int omap_iommu_runtime_resume(struct device *dev)
Mark them as __maybe_unused to let gcc silently drop them instead of warning. Fixes: db8918f61d51 ("iommu/omap: streamline enable/disable through runtime pm callbacks") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Suman Anna <s-anna@ti.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
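For reference, a minimal sketch of the pattern with generic names rather than the OMAP driver's own:

    static int __maybe_unused foo_runtime_suspend(struct device *dev)
    {
            /* silently dropped by the compiler when CONFIG_PM is off,
             * without tripping -Wunused-function */
            return 0;
    }

    static const struct dev_pm_ops foo_pm_ops = {
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, NULL, NULL)
    };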
2019-09-06  iommu/amd: Fix race in increase_address_space()  (Joerg Roedel)
After the conversion to lock-less dma-api calls, the increase_address_space() function can be invoked without any locking. Multiple CPUs could potentially race to increase the address space, leading to invalid domain->mode settings and invalid page-tables. This has been happening in the wild under high IO load and memory pressure. Fix the race by locking this operation. The function is called infrequently, so this does not reintroduce a performance regression in the dma-api path. Reported-by: Qian Cai <cai@lca.pw> Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator') Signed-off-by: Joerg Roedel <jroedel@suse.de>
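The shape of the fix is roughly the following; the field and constant names are recalled from the AMD driver and may not match it exactly:

    static void increase_address_space(struct protection_domain *domain, gfp_t gfp)
    {
            unsigned long flags;
            u64 *pte;

            spin_lock_irqsave(&domain->lock, flags);

            /* re-check under the lock: another CPU may have grown it already */
            if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
                    goto out;

            pte = (u64 *)get_zeroed_page(gfp);
            if (!pte)
                    goto out;

            /* ... chain the old root below the new one, bump domain->mode ... */
    out:
            spin_unlock_irqrestore(&domain->lock, flags);
    }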
2019-09-06  iommu/amd: Flush old domains in kdump kernel  (Stuart Hayes)
When devices are attached to the amd_iommu in a kdump kernel, the old device table entries (DTEs), which were copied from the crashed kernel, will be overwritten with a new domain number. When the new DTE is written, the IOMMU is told to flush the DTE from its internal cache--but it is not told to flush the translation cache entries for the old domain number. Without this patch, AMD systems using the tg3 network driver fail when kdump tries to save the vmcore to a network system, showing network timeouts and (sometimes) IOMMU errors in the kernel log. This patch will flush IOMMU translation cache entries for the old domain when a DTE gets overwritten with a new domain number. Signed-off-by: Stuart Hayes <stuart.w.hayes@gmail.com> Fixes: 3ac3e5ee5ed5 ('iommu/amd: Copy old trans table from old kernel') Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-05  iommu/ipmmu-vmsa: Disable cache snoop transactions on R-Car Gen3  (Hai Nguyen Pham)
According to the Hardware Manual Errata for Rev. 1.50 of April 10, 2019, cache snoop transactions for page table walk requests are not supported on R-Car Gen3. Hence, this patch removes setting these fields in the IMTTBCR register, since it will have no effect, and adds comments to the register bit definitions, to make it clear they apply to R-Car Gen2 only. Signed-off-by: Hai Nguyen Pham <hai.pham.ud@renesas.com> [geert: Reword, add comments] Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-05  iommu/ipmmu-vmsa: Move IMTTBCR_SL0_TWOBIT_* to restore sort order  (Geert Uytterhoeven)
Move the recently added IMTTBCR_SL0_TWOBIT_* definitions up, to make sure all IMTTBCR register bit definitions are sorted by decreasing bit index. Add comments to make it clear that they exist on R-Car Gen3 only. Fixes: c295f504fb5a38ab ("iommu/ipmmu-vmsa: Allow two bit SL0") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-04  dma-mapping: introduce a dma_common_find_pages helper  (Christoph Hellwig)
A helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere instead of slightly different or missing ones in a few places. Signed-off-by: Christoph Hellwig <hch@lst.de>
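A hedged sketch of what such a helper can look like, assuming remapped DMA areas are tagged with a dedicated vmalloc flag such as the VM_DMA_COHERENT introduced in the entry below:

    struct page **dma_common_find_pages(void *cpu_addr)
    {
            struct vm_struct *area = find_vm_area(cpu_addr);

            /* only coherent DMA remaps carry a backing page array */
            if (!area || !(area->flags & VM_DMA_COHERENT))
                    return NULL;
            return area->pages;
    }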
2019-09-04  dma-mapping: always use VM_DMA_COHERENT for generic DMA remap  (Christoph Hellwig)
Currently the generic dma remap allocator gets a vm_flags passed by the caller that is a little confusing. We just introduced a generic vmalloc-level flag to identify the dma coherent allocations, so use that everywhere and remove the now pointless argument. Signed-off-by: Christoph Hellwig <hch@lst.de>
2019-09-04  dma-mapping: explicitly wire up ->mmap and ->get_sgtable  (Christoph Hellwig)
While the default ->mmap and ->get_sgtable implementations work for the majority of our dma_map_ops implementations, they are inherently unsafe for others that don't use the page allocator or CMA and/or use their own way of remapping not covered by the common code. So remove the defaults if these methods are not wired up, but instead wire up the default implementations for all safe instances. Fixes: e1c7e324539a ("dma-mapping: always provide the dma_map_ops based implementation") Signed-off-by: Christoph Hellwig <hch@lst.de>
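Concretely, for an instance where the defaults are safe, the wiring looks roughly like this; the ops name is invented and only the two callbacks in question are shown:

    static const struct dma_map_ops foo_dma_ops = {
            /* ... .alloc, .free, .map_page and friends ... */
            .mmap           = dma_common_mmap,
            .get_sgtable    = dma_common_get_sgtable,
    };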
2019-09-03  iommu: Don't use sme_active() in generic code  (Joerg Roedel)
Switch to the generic function mem_encrypt_active() because sme_active() is x86 specific and can't be called from generic code on other platforms than x86. Fixes: 2cc13bb4f59f ("iommu: Disable passthrough mode when SME is active") Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-03  iommu/vt-d: Remove global page flush support  (Jacob Pan)
Global pages support is removed from VT-d spec 3.0. Since global pages G flag only affects first-level paging structures and because DMA request with PASID are only supported by VT-d spec. 3.0 and onward, we can safely remove global pages support. For kernel shared virtual address IOTLB invalidation, PASID granularity and page selective within PASID will be used. There is no global granularity supported. Without this fix, IOTLB invalidation will cause invalid descriptor error in the queued invalidation (QI) interface. Fixes: 1c4f88b7f1f9 ("iommu/vt-d: Shared virtual address in scalable mode") Reported-by: Sanjay K Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-09-03  iommu/arm-smmu-v3: Fix build error without CONFIG_PCI_ATS  (YueHaibing)
If CONFIG_PCI_ATS is not set, building fails:
  drivers/iommu/arm-smmu-v3.c: In function arm_smmu_ats_supported:
  drivers/iommu/arm-smmu-v3.c:2325:35: error: struct pci_dev has no member named ats_cap; did you mean msi_cap?
    return !pdev->untrusted && pdev->ats_cap;
                                     ^~~~~~~
ats_cap should only be used when CONFIG_PCI_ATS is defined, so use an #ifdef block to guard it. Fixes: bfff88ec1afe ("iommu/arm-smmu-v3: Rework enabling/disabling of ATS for PCI masters") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
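One common shape for such a guard, sketched here with a bare pci_dev parameter instead of the driver's own master structure:

    #ifdef CONFIG_PCI_ATS
    static bool arm_smmu_ats_supported(struct pci_dev *pdev)
    {
            return !pdev->untrusted && pdev->ats_cap;
    }
    #else
    static bool arm_smmu_ats_supported(struct pci_dev *pdev)
    {
            return false;   /* ats_cap does not exist without CONFIG_PCI_ATS */
    }
    #endif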
2019-09-03  iommu/dma: add a new dma_map_ops of get_merge_boundary()  (Yoshihiro Shimoda)
This patch adds a new dma_map_ops of get_merge_boundary() to expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
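For the IOMMU DMA path the callback plausibly reduces to the smallest supported IOMMU page size minus one; a sketch, with the IOMMU_DOMAIN_DMA check assumed to be handled by where these ops are installed:

    static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
    {
            struct iommu_domain *domain = iommu_get_dma_domain(dev);

            /* segments may merge as long as they do not cross an IOMMU page */
            return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
    }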
2019-08-30  iommu/qcom: Use struct_size() helper  (Gustavo A. R. Silva)
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:
  struct qcom_iommu_dev {
          ...
          struct qcom_iommu_ctx *ctxs[0];   /* indexed by asid-1 */
  };
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes. So, replace the following form:
  sizeof(*qcom_iommu) + (max_asid * sizeof(qcom_iommu->ctxs[0]))
with:
  struct_size(qcom_iommu, ctxs, max_asid)
Also, notice that, in this case, variable sz is not necessary, hence it is removed. This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
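In the probe context the allocation then reads roughly as follows (devm_kzalloc is assumed from the surrounding code):

    qcom_iommu = devm_kzalloc(dev, struct_size(qcom_iommu, ctxs, max_asid),
                              GFP_KERNEL);
    if (!qcom_iommu)
            return -ENOMEM;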
2019-08-30  iommu: Remove wrong default domain comments  (Tom Murphy)
These comments are wrong. request_default_domain_for_dev doesn't just handle direct mapped domains. Signed-off-by: Tom Murphy <murphyt7@tcd.ie> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  Revert "iommu/vt-d: Avoid duplicated pci dma alias consideration"  (Lu Baolu)
This reverts commit 557529494d79f3f1fadd486dd18d2de0b19be4da. Commit 557529494d79f ("iommu/vt-d: Avoid duplicated pci dma alias consideration") aimed to address a NULL pointer dereference that happened when a thunderbolt device driver returned unexpectedly. Unfortunately, this change breaks a previous pci quirk added by commit cc346a4714a59 ("PCI: Add function 1 DMA alias quirk for Marvell devices"); as a result, devices like the Marvell 88SE9128 SATA controller no longer work. We will continue to try to find the real culprit mentioned in 557529494d79f, but for now we should revert it to fix the current breakage. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=204627 Cc: Stijn Tintel <stijn@linux-ipv6.be> Cc: Petr Vandrovec <petr@vandrovec.name> Reported-by: Stijn Tintel <stijn@linux-ipv6.be> Reported-by: Petr Vandrovec <petr@vandrovec.name> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/dma: Fix for dereferencing before null checking  (Yunsheng Lin)
The cookie is dereferenced before null checking in the function iommu_dma_init_domain. This patch moves the dereferencing after the null checking. Fixes: fdbe574eb693 ("iommu/dma: Allow MSI-only cookies") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  Merge branch 'arm/smmu' into arm/mediatek  (Joerg Roedel)
2019-08-30  iommu/mediatek: Clean up struct mtk_smi_iommu  (Yong Wu)
Remove the "struct mtk_smi_iommu" to simplify the code since it has only one item in it right now. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  memory: mtk-smi: Get rid of need_larbid  (Yong Wu)
The "mediatek,larb-id" has already been parsed in MTK IOMMU driver. It's no need to parse it again in SMI driver. Only clean some codes. This patch is fit for all the current mt2701, mt2712, mt7623, mt8173 and mt8183. After this patch, the "mediatek,larb-id" only be needed for mt2712 which have 2 M4Us. In the other SoCs, we can get the larb-id from M4U in which the larbs in the "mediatek,larbs" always are ordered. Correspondingly, the larb_nr in the "struct mtk_smi_iommu" could also be deleted. CC: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Fix VLD_PA_RNG register backup when suspend  (Yong Wu)
The register VLD_PA_RNG (0x118) was not backed up when 4GB mode support was added for mt2712. This patch adds it. Fixes: 30e2fccf9512 ("iommu/mediatek: Enlarge the validate PA range for 4GB mode") Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Add mmu1 support  (Yong Wu)
Normally the M4U HW connects the EMI with the smi. The diagram looks like this:

         EMI
          |
         M4U
          |
      smi-common
          |
    -----------------
    |    |    |    |  ...
  larb0 larb1 larb2 larb3

Actually there are two mmu cells in the M4U HW, as in this diagram:

            EMI
         ---------
         |       |
        mmu0    mmu1   <- M4U
         |       |
         ---------
             |
         smi-common
             |
       -----------------
       |    |    |    |  ...
     larb0 larb1 larb2 larb3

This patch adds support for mmu1. In order to get better performance, we can route some larbs to mmu1 while the others still go to mmu0. This is controlled by the SMI COMMON register SMI_BUS_SEL (0x220). The mt2712, mt8173 and mt8183 M4U HW all have two mmu cells. The default value of that register is 0, which means all the larbs go to mmu0 by default. This is a preparatory patch for adjusting SMI_BUS_SEL for mt8183. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Add mt8183 IOMMU support  (Yong Wu)
The M4U IP block in mt8183 is MediaTek's generation-2 M4U, which uses the ARM Short-descriptor format like mt8173, and most of the HW registers are the same. The main differences between mt8183 and mt8173/mt2712 are:
1) mt8183 has only one M4U HW like mt8173, while mt2712 has two.
2) mt8183 doesn't have the "bclk" clock; it uses the EMI clock instead.
3) mt8183 can support DRAM over 4GB, but it doesn't call this "4GB mode".
4) The mt8183 pgtable base register (0x0) extends bit[1:0], which represent bit[33:32] of the physical address of the pgtable base. But the standard ttbr0[1] means the S bit, which is enabled by default; hence, we add a mask.
5) mt8183 HW has GALS modules; SMI should enable "has_gals" support.
6) mt8183 needs reset_axi like mt8173.
7) The larb-id in smi-common is remapped; M4U should add its larbid_remap.
Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Move vld_pa_rng into plat_data  (Yong Wu)
Both mt8173 and mt8183 don't have the vld_pa_rng (valid physical address range) register, while mt2712 does. Move it into the plat_data. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Move reset_axi into plat_data  (Yong Wu)
In mt8173 and mt8183, register 0x48 is REG_MMU_STANDARD_AXI_MODE, while it is REG_MMU_CTRL in the other SoCs, and the meaning of its bits is completely different from REG_MMU_STANDARD_AXI_MODE. This patch moves this property to plat_data; it is also a preparatory patch for mt8183. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Nicolas Boichat <drinkcat@chromium.org> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Refine protect memory definition  (Yong Wu)
The protect memory setting is slightly different across SoCs. In the register REG_MMU_CTRL_REG (0x110), the TF_PROT (translation fault protect) field is normally at shift 4, while it is at shift 5 only on mt8173. This patch deletes the complex macro and uses a simple if-else instead. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Add larb-id remapped support  (Yong Wu)
The larb-id may be remapped in the smi-common. This means the larb-id reported in mtk_iommu_isr isn't the real larb-id. Take mt8183 as an example:

                        M4U
                         |
    ---------------------------------------------
    |                 SMI common                |
    -0-----7-----5-----6-----1-----2------3-----4-   <- Id remapped
     |     |     |     |     |     |      |     |
   larb0 larb1  IPU0  IPU1 larb4 larb5  larb6  CCU
    disp  vdec   img   cam  venc   img   cam

As above, larb0 connects with id 0 in smi-common, larb1 connects with id 7 in smi-common, and so on. If the larb-id reported in the isr is 7, it is actually larb1 (vdec). In order to output the right larb-id in the isr, this patch adds a larb-id remapping relationship. If a SoC has no such larb-id remapping, the linear mapping array is used instead. This is also a preparatory patch for mt8183. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Nicolas Boichat <drinkcat@chromium.org> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Add bclk can be supported optionally  (Yong Wu)
In some SoCs, the M4U doesn't have its own "bclk"; it uses the EMI clock instead, which is always enabled when entering the kernel. Currently mt2712 and mt8173 have this bclk while mt8183 doesn't. This is also a preparatory patch for mt8183. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Evan Green <evgreen@chromium.org> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Adjust the PA for the 4GB Mode  (Yong Wu)
After extending the v7s format to support PA[33:32] for MediaTek, we have to adjust the PA ourselves for the 4GB mode. In 4GB mode, the PA is remapped like this:

  CPU PA         ->  M4U output PA
  0x4000_0000        0x1_4000_0000   (add bit 32)
  0x8000_0000        0x1_8000_0000   ...
  0xc000_0000        0x1_c000_0000   ...
  0x1_0000_0000      0x1_0000_0000   (no change)

1) Always add bit 32 to the CPU PA in ->map.
2) Discard bit 32 in iova_to_phys if PA > 0x1_4000_0000, since the iommu consumers always use the CPU PA.
Besides, the "oas" is always set to 34 since v7s already supports our case. Both mt2712 and mt8173 support this "4GB mode" while mt8183 doesn't; the PA in mt8183 is not remapped. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
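A minimal sketch of the two conversions described above; the helper names are invented and the thresholds come from the table:

    /* CPU PA -> M4U bus PA, used on the ->map path in 4GB mode */
    static phys_addr_t mtk_cpu_pa_to_bus(phys_addr_t pa)
    {
            return pa | BIT_ULL(32);                /* always set bit 32 */
    }

    /* M4U bus PA -> CPU PA, used on the ->iova_to_phys path */
    static phys_addr_t mtk_bus_pa_to_cpu(phys_addr_t pa)
    {
            if (pa >= 0x140000000ULL)               /* regions B/C/D */
                    pa &= ~BIT_ULL(32);
            return pa;                              /* region E unchanged */
    }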
2019-08-30  iommu/io-pgtable-arm-v7s: Extend to support PA[33:32] for MediaTek  (Yong Wu)
MediaTek extends the ARM v7s descriptor to support up to 34-bit PAs, where bit 32 and bit 33 are encoded in bit 9 and bit 4 of the PTE respectively. Meanwhile the iova is still 32 bits. Regarding whether the pagetable address itself can be above 4GB: mt8183 supports it while the previous mt8173 doesn't, so that behaviour is kept as is. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
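A sketch of the encoding just described; the macro and helper names are invented, and the 4KB-page address mask is an assumption:

    #define MTK_PTE_PA_BIT32        BIT(9)  /* carries PA bit 32 */
    #define MTK_PTE_PA_BIT33        BIT(4)  /* carries PA bit 33 */

    static u32 mtk_paddr_to_iopte(phys_addr_t paddr)
    {
            u32 pte = paddr & GENMASK(31, 12);      /* 32-bit part of the PA */

            if (paddr & BIT_ULL(32))
                    pte |= MTK_PTE_PA_BIT32;
            if (paddr & BIT_ULL(33))
                    pte |= MTK_PTE_PA_BIT33;
            return pte;
    }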
2019-08-30  iommu/io-pgtable-arm-v7s: Rename the quirk from MTK_4GB to MTK_EXT  (Yong Wu)
In the previous mt2712/mt8173, MediaTek extended the v7s format to support 4GB of DRAM. In the latest mt8183, we extend it to support PAs up to 34 bits, so the "MTK_4GB" name no longer fits. This patch only changes the quirk name to "MTK_EXT". Signed-off-by: Yong Wu <yong.wu@mediatek.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/io-pgtable-arm-v7s: Use ias/oas to check the valid iova/pa  (Yong Wu)
Use ias/oas to check the valid iova/pa. Synchronize this checking with io-pgtable-arm.c. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/io-pgtable-arm-v7s: Add paddr_to_iopte and iopte_to_paddr helpers  (Yong Wu)
Add two helper functions: paddr_to_iopte and iopte_to_paddr. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Evan Green <evgreen@chromium.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Fix iova_to_phys PA start for 4GB mode  (Yong Wu)
In M4U 4GB mode, the physical address is remapped as below:

  CPU physical address:
  ====================
  0       1G      2G      3G      4G      5G
  |---A---|---B---|---C---|---D---|---E---|
  +--I/O--+------------Memory-------------+

  IOMMU output physical address:
  =============================
                                  4G      5G      6G      7G      8G
                                  |---E---|---B---|---C---|---D---|
                                  +------------Memory-------------+

Region 'A' (I/O) cannot be mapped by the M4U. For regions 'B'/'C'/'D', bit 32 of the CPU physical address always needs to be set, and for region 'E' the CPU physical address is kept as is. It looks like this:

  CPU PA         ->  M4U output PA
  0x4000_0000        0x1_4000_0000   (add bit 32)
  0x8000_0000        0x1_8000_0000   ...
  0xc000_0000        0x1_c000_0000   ...
  0x1_0000_0000      0x1_0000_0000   (no change)

Additionally, the iommu consumers always use the CPU physical address. The PA returned from v7s in iova_to_phys is always a u32, so from the CPU's point of view, BIT(32) only needs to be added when the PA is below 0x4000_0000. Fixes: 30e2fccf9512 ("iommu/mediatek: Enlarge the validate PA range for 4GB mode") Signed-off-by: Yong Wu <yong.wu@mediatek.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/mediatek: Use a struct as the platform data  (Yong Wu)
Use a struct as the platform-specific data instead of the enumeration. This is a preparatory patch for adding mt8183 iommu support. Signed-off-by: Yong Wu <yong.wu@mediatek.com> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Reviewed-by: Evan Green <evgreen@chromium.org> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu: Revisit iommu_insert_resv_region() implementation  (Eric Auger)
Current implementation is recursive and in case of allocation failure the existing @regions list is altered. A non recursive version looks better for maintainability and simplifies the error handling. We use a separate stack for overlapping segment merging. The elements are sorted by start address and then by type, if their start address match. Note this new implementation may change the region order of appearance in /sys/kernel/iommu_groups/<n>/reserved_regions files but this order has never been documented, see commit bc7d12b91bd3 ("iommu: Implement reserved_regions iommu-group sysfs file"). Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/vt-d: Fix wrong analysis whether devices share the same bus  (Nadav Amit)
set_msi_sid_cb() is used to determine whether device aliases share the same bus, but it can falsely indicate that aliases use the same bus when in fact they do not. The reason is that set_msi_sid_cb() assumes that pdev is fixed, while pci_for_each_dma_alias() can actually call fn() with pdev set to a subordinate device. As a result, running a VM on ESX with VT-d emulation enabled can result in log warnings such as:
  DMAR: [INTR-REMAP] Request device [00:11.0] fault index 3b [fault reason 38] Blocked an interrupt request due to source-id verification failure
This seems to cause additional ata errors such as:
  ata3.00: qc timeout (cmd 0xa1)
  ata3.00: failed to IDENTIFY (I/O error, err_mask=0x4)
These timeouts also make boot much longer and cause other errors. Fix it by comparing the alias with the previous one instead. Fixes: 3f0c625c6ae71 ("iommu/vt-d: Allow interrupts from the entire bus for aliased devices") Cc: stable@vger.kernel.org Cc: Logan Gunthorpe <logang@deltatee.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Joerg Roedel <joro@8bytes.org> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Nadav Amit <namit@vmware.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-30  iommu/iova: Avoid false sharing on fq_timer_on  (Eric Dumazet)
In commit 14bd9a607f90 ("iommu/iova: Separate atomic variables to improve performance") Jinyu Qi identified that the atomic_cmpxchg() in queue_iova() was causing a performance loss and moved critical fields so that the false sharing would not impact them. However, avoiding the false sharing in the first place seems easy. We should attempt the atomic_cmpxchg() no more than 100 times per second. Adding an atomic_read() will keep the cache line mostly shared. This false sharing came with commit 9a005a800ae8 ("iommu/iova: Add flush timer"). Signed-off-by: Eric Dumazet <edumazet@google.com> Fixes: 9a005a800ae8 ('iommu/iova: Add flush timer') Cc: Jinyu Qi <jinyuqi@huawei.com> Cc: Joerg Roedel <jroedel@suse.de> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
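The resulting pattern in queue_iova() is roughly the following sketch, assuming the iova_domain field names used by the flush-queue code:

    /* read first: keeps the cache line shared on the common path and only
     * attempts the exclusive cmpxchg when the timer is actually off */
    if (!atomic_read(&iovad->fq_timer_on) &&
        !atomic_cmpxchg(&iovad->fq_timer_on, 0, 1))
            mod_timer(&iovad->fq_timer,
                      jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));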
2019-08-30  iommu/amd: Silence warnings under memory pressure  (Qian Cai)
When running heavy memory pressure workloads, the system throws endless warnings:
  smartpqi 0000:23:00.0: AMD-Vi: IOMMU mapping error in map_sg (io-pages: 5 reason: -12)
  Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
  swapper/10: page allocation failure: order:0, mode:0xa20(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0,4
  Call Trace:
   <IRQ>
   dump_stack+0x62/0x9a
   warn_alloc.cold.43+0x8a/0x148
   __alloc_pages_nodemask+0x1a5c/0x1bb0
   get_zeroed_page+0x16/0x20
   iommu_map_page+0x477/0x540
   map_sg+0x1ce/0x2f0
   scsi_dma_map+0xc6/0x160
   pqi_raid_submit_scsi_cmd_with_io_request+0x1c3/0x470 [smartpqi]
   do_IRQ+0x81/0x170
   common_interrupt+0xf/0xf
   </IRQ>
The allocation in iommu_map_page() can fail, and the volume of these calls can be huge, which may generate a lot of serial console output and consume all CPUs. Fix it by silencing the warning at this call site; there is still a dev_err() later to report the failure. Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Joerg Roedel <jroedel@suse.de>
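One way such a call site can be silenced, sketched under the assumption that the failing allocation is an atomic page allocation; the variable name is illustrative:

    /* may fail under memory pressure; the caller already reports the
     * mapping error via dev_err(), so skip the page-allocation splat */
    unsigned long pt_page = get_zeroed_page(GFP_ATOMIC | __GFP_NOWARN);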
2019-08-23  Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu  (Joerg Roedel)
2019-08-23  Merge branches 'for-joerg/arm-smmu/smmu-v2' and 'for-joerg/arm-smmu/smmu-v3' into for-joerg/arm-smmu/updates  (Will Deacon)
* for-joerg/arm-smmu/smmu-v2: Refactoring to allow for implementation-specific hooks in 'arm-smmu-impl.c'
* for-joerg/arm-smmu/smmu-v3: Support for deferred TLB invalidation and batching of commands; rework ATC invalidation for ATS-enabled PCIe masters
2019-08-23  iommu/amd: Override wrong IVRS IOAPIC on Raven Ridge systems  (Kai-Heng Feng)
Raven Ridge systems may have a malfunctioning touchpad or hang at boot if an incorrect IVRS IOAPIC is provided by the BIOS. Users have already found the correct "ivrs_ioapic=" values, so let's put them inside the kernel to work around the buggy BIOS. BugLink: https://bugs.launchpad.net/bugs/1795292 BugLink: https://bugs.launchpad.net/bugs/1837688 Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-08-23  iommu: Disable passthrough mode when SME is active  (Joerg Roedel)
Using Passthrough mode when SME is active causes certain devices to use the SWIOTLB bounce buffer. The bounce buffer code has an upper limit of 256kb for the size of DMA allocations, which is too small for certain devices and causes them to fail. With this patch we enable IOMMU by default when SME is active in the system, making the default configuration work for more systems than it does now. Users that don't want IOMMUs to be enabled still can disable them with kernel parameters. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>