path: root/drivers/xen/swiotlb-xen.c

2021-05-14 xen/swiotlb: check if the swiotlb has already been initialized (Stefano Stabellini)

xen_swiotlb_init calls swiotlb_late_init_with_tbl, which fails with -ENOMEM if the swiotlb has already been initialized. Add an explicit check that io_tlb_default_mem != NULL at the beginning of xen_swiotlb_init. If the swiotlb is already initialized, print a warning and return -EEXIST. On x86, the error propagates. On ARM, we don't actually need a special swiotlb buffer (yet); any buffer would do, so ignore the error and continue.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210512201823.1963-3-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

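A minimal sketch of the added guard at the top of xen_swiotlb_init, assuming the post-"dynamically allocate io_tlb_default_mem" pointer; the exact warning text is illustrative:

    int xen_swiotlb_init(void)
    {
            /* Bail out early if the core swiotlb buffer already exists. */
            if (io_tlb_default_mem != NULL) {
                    pr_warn("swiotlb buffer already initialized\n");
                    return -EEXIST;
            }
            /* ... the normal late-init path continues here ... */
    }
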
2021-03-19 swiotlb: dynamically allocate io_tlb_default_mem (Christoph Hellwig)

Instead of allocating ->list and ->orig_addr separately, just do one dynamic allocation for the actual io_tlb_mem structure. This simplifies a lot of the initialization code, and also allows just checking io_tlb_default_mem to see if swiotlb is in use.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

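Folding the per-slot metadata into the descriptor means a single allocation sizes everything; a sketch of the early-boot path, assuming a slots[] flexible array member on struct io_tlb_mem:

    struct io_tlb_mem *mem;

    /* One allocation covers the descriptor plus all per-slot state. */
    mem = memblock_alloc(struct_size(mem, slots, nslabs), PAGE_SIZE);
    if (!mem)
            panic("%s: failed to allocate io_tlb_mem\n", __func__);
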
2021-03-19 swiotlb: move global variables into a new io_tlb_mem structure (Claire Chang)

Added a new struct, io_tlb_mem, as the IO TLB memory pool descriptor, and moved the relevant global variables into that struct. This will be useful later to allow for a restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
[hch: rebased]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

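Roughly, the new descriptor gathers the former io_tlb_* globals; the field list below is abridged and inferred from the old global names:

    struct io_tlb_mem {
            phys_addr_t start;      /* was io_tlb_start */
            phys_addr_t end;        /* was io_tlb_end */
            unsigned long nslabs;   /* was io_tlb_nslabs */
            unsigned long used;     /* was io_tlb_used */
            unsigned int index;     /* was io_tlb_index */
            spinlock_t lock;        /* was io_tlb_lock */
            /* ... plus the slot bookkeeping arrays ... */
    };
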
2021-03-17 xen-swiotlb: remove the unused size argument from xen_swiotlb_fixup (Christoph Hellwig)

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2021-03-17 xen-swiotlb: split xen_swiotlb_init (Christoph Hellwig)

Split xen_swiotlb_init into a normal and an early case. That makes both much simpler and more readable, and also allows marking the early code as __init and x86-only.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

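After the split, the two entry points look roughly like this; the _early spelling is an assumption here:

    int xen_swiotlb_init(void);                   /* late init, may fail */
    void __init xen_swiotlb_init_early(void);     /* boot time, x86-only */
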
2021-03-17 swiotlb: lift the double initialization protection from xen-swiotlb (Christoph Hellwig)

Lift the double initialization protection from xen-swiotlb to the core code to avoid exposing too many swiotlb internals. Also upgrade the check to a warning, as it should not happen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2021-03-17 xen-swiotlb: remove xen_io_tlb_start and xen_io_tlb_nslabs (Christoph Hellwig)

The xen_io_tlb_start and xen_io_tlb_nslabs variables are now only used in xen_swiotlb_init, so replace them with local variables.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2021-03-17 xen-swiotlb: remove xen_set_nslabs (Christoph Hellwig)

The xen_set_nslabs function is a little weird: it has just one caller, that caller passes a global variable as the argument, which is then overridden in the function, and a derivative of it is returned. Just add a cpp symbol for the default size using a readable constant and open code the remaining three lines in the caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

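A sketch of the resulting constant, assuming the usual swiotlb slab macros and the existing 64MB default:

    #define DEFAULT_NSLABS  ALIGN(SZ_64M >> IO_TLB_SHIFT, IO_TLB_SEGSIZE)
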
2021-03-17 xen-swiotlb: use io_tlb_end in xen_swiotlb_dma_supported (Christoph Hellwig)

Use the existing variable that holds the physical address for xen_io_tlb_end to simplify xen_swiotlb_dma_supported a bit, and remove the otherwise unused xen_io_tlb_end variable and the xen_virt_to_bus helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2021-03-17 xen-swiotlb: use is_swiotlb_buffer in is_xen_swiotlb_buffer (Christoph Hellwig)

Use is_swiotlb_buffer to check if a physical address is a swiotlb buffer. This works because xen-swiotlb uses the same buffer as the main swiotlb code, and xen_io_tlb_{start,end} are just the addresses for it that went through phys_to_virt.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

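A hedged sketch of the check after the change: translate the DMA address back to a local physical address, then ask the core code (the exact body is an assumption):

    static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
    {
            phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);

            /* Only addresses inside our domain can be in the swiotlb pool. */
            if (pfn_valid(PFN_DOWN(paddr)))
                    return is_swiotlb_buffer(paddr);
            return 0;
    }
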
2021-03-17 swiotlb: split swiotlb_tbl_sync_single (Christoph Hellwig)

Split swiotlb_tbl_sync_single into two separate functions, one for the to-device and one for the to-cpu synchronization.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

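The resulting pair, with signatures assumed from the naming conventions of the surrounding DMA code:

    void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
                                        size_t size, enum dma_data_direction dir);
    void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
                                     size_t size, enum dma_data_direction dir);
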
2021-03-17 swiotlb: remove the alloc_size parameter to swiotlb_tbl_unmap_single (Christoph Hellwig)

Now that swiotlb remembers the allocation size, there is no need to pass it back to swiotlb_tbl_unmap_single.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2020-11-02 swiotlb: remove the tbl_dma_addr argument to swiotlb_tbl_map_single (Christoph Hellwig)

The tbl_dma_addr argument is used to check the DMA boundary for the allocations, and thus needs to be a dma_addr_t. swiotlb-xen instead passed a physical address, which could lead to incorrect results for strange offsets. Fix this by removing the parameter entirely and hard-coding the DMA address of io_tlb_start instead.

Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2020-10-06 dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h> (Christoph Hellwig)

Move more nitty-gritty DMA implementation details into the common internal header.

Signed-off-by: Christoph Hellwig <hch@lst.de>

2020-10-06 dma-mapping: split <linux/dma-mapping.h> (Christoph Hellwig)

Split out all the bits that are purely for dma_map_ops implementations and related code into a new <linux/dma-map-ops.h> header, so that they don't get pulled into all the drivers. That also means the architecture-specific <asm/dma-mapping.h> is no longer pulled in by <linux/dma-mapping.h>, which leads to missing includes that were previously pulled in by the x86 or arm versions in a few not overly portable drivers.

Signed-off-by: Christoph Hellwig <hch@lst.de>

2020-09-25 dma-mapping: add a new dma_alloc_pages API (Christoph Hellwig)

This API is the equivalent of alloc_pages, except that the returned memory is guaranteed to be DMA addressable by the passed-in device. The implementation will also be used to provide a more sensible replacement for the DMA_ATTR_NON_CONSISTENT flag. Additionally, dma_alloc_noncoherent is switched over to use dma_alloc_pages as its backend.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> (MIPS part)

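A short usage sketch of the new API paired with its free routine; the surrounding error handling is illustrative:

    struct page *page;
    dma_addr_t dma;

    /* Pages are guaranteed to be DMA addressable by this device. */
    page = dma_alloc_pages(dev, size, &dma, DMA_BIDIRECTIONAL, GFP_KERNEL);
    if (!page)
            return -ENOMEM;
    /* ... use page_address(page) for the CPU and dma for the device ... */
    dma_free_pages(dev, size, page, dma, DMA_BIDIRECTIONAL);
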
2020-08-04 xen/arm: introduce phys/dma translations in xen_dma_sync_for_* (Stefano Stabellini)

xen_dma_sync_for_cpu, xen_dma_sync_for_device and xen_arch_need_swiotlb are called with dma addresses. On some platforms dma addresses could be different from physical addresses, so before doing any operations on these addresses we need to convert them back to physical addresses using dma_to_phys. Move the arch_sync_dma_for_cpu and arch_sync_dma_for_device calls from xen_dma_sync_for_cpu/device to swiotlb-xen.c, and add a call to dma_to_phys to do the address translations there. dma_cache_maint is fixed by the next patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Acked-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-10-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: introduce phys_to_dma/dma_to_phys translations (Stefano Stabellini)

With some devices, physical addresses are different from dma addresses. To be able to deal with these cases, we need to call phys_to_dma on physical addresses (including machine addresses in Xen terminology) before returning them from xen_swiotlb_alloc_coherent and xen_swiotlb_map_page. We also need to convert dma addresses back to physical addresses using dma_to_phys in xen_swiotlb_free_coherent and xen_swiotlb_unmap_page if we want to do any operations on them. Call dma_to_phys in is_xen_swiotlb_buffer. Introduce xen_phys_to_dma and call phys_to_dma in its implementation. Introduce xen_dma_to_phys and call dma_to_phys in its implementation. Call xen_phys_to_dma/xen_dma_to_phys instead of xen_phys_to_bus/xen_bus_to_phys throughout swiotlb-xen.c. Everything is taken care of by these changes except for xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a few explicit phys_to_dma/dma_to_phys calls.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-9-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

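Per the description, the two new helpers simply stack the Xen phys<->bus translation with the generic phys<->dma one; a sketch:

    static inline dma_addr_t xen_phys_to_dma(struct device *dev, phys_addr_t paddr)
    {
            return phys_to_dma(dev, xen_phys_to_bus(dev, paddr));
    }

    static inline phys_addr_t xen_dma_to_phys(struct device *dev, dma_addr_t dma_addr)
    {
            return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
    }
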
2020-08-04 swiotlb-xen: remove XEN_PFN_PHYS (Stefano Stabellini)

XEN_PFN_PHYS is only used in one place in swiotlb-xen, making things more complex than they need to be. Remove the definition of XEN_PFN_PHYS and open code the cast in the one place where it is needed.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Link: https://lore.kernel.org/r/20200710223427.6897-8-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: add struct device * parameter to is_xen_swiotlb_buffer (Stefano Stabellini)

No functional changes. The parameter is unused in this patch, but will be used by later patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-7-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: add struct device * parameter to xen_dma_sync_for_device (Stefano Stabellini)

No functional changes. The parameter is unused in this patch, but will be used by later patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-6-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: add struct device * parameter to xen_dma_sync_for_cpu (Stefano Stabellini)

No functional changes. The parameter is unused in this patch, but will be used by later patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-5-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: add struct device * parameter to xen_bus_to_phys (Stefano Stabellini)

No functional changes. The parameter is unused in this patch, but will be used by later patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-4-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: add struct device * parameter to xen_phys_to_bus (Stefano Stabellini)

No functional changes. The parameter is unused in this patch, but will be used by later patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-3-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: remove start_dma_addr (Stefano Stabellini)

It is not strictly needed. Call virt_to_phys on xen_io_tlb_start instead. It will be useful not to have a start_dma_addr around with the next patches. Note that virt_to_phys is not the same as xen_virt_to_bus, but the value is actually compared against __pa(xen_io_tlb_start) as passed to swiotlb_init_with_tbl, so virt_to_phys is what we want.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-2-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

2020-08-04 swiotlb-xen: use vmalloc_to_page on vmalloc virt addresses (Boris Ostrovsky)

xen_alloc_coherent_pages might return pages for which virt_to_phys and virt_to_page don't work, e.g. ioremap'ed pages. So in xen_swiotlb_free_coherent we can't assume that virt_to_page works. Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc virt addresses. This patch fixes the following crash at boot on RPi4 (the underlying issue is not RPi4-specific): https://marc.info/?l=xen-devel&m=158862573216800

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
Link: https://lore.kernel.org/r/20200710223427.6897-1-sstabellini@kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>

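The fix boils down to picking the right virt-to-page translation in the freeing path; a minimal sketch:

    struct page *page;

    /* vmalloc/ioremap addresses have no linear-map struct page. */
    if (is_vmalloc_addr(vaddr))
            page = vmalloc_to_page(vaddr);
    else
            page = virt_to_page(vaddr);
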
2019-11-20 dma-direct: exclude dma_direct_map_resource from the min_low_pfn check (Christoph Hellwig)

The valid memory address check in dma_capable only makes sense when mapping normal memory, not when using dma_map_resource to map a device resource. Add a new boolean argument to dma_capable to exclude that check for the dma_map_resource case.

Fixes: b12d66278dd6 ("dma-direct: check for overflows on 32 bit DMA addresses")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>

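The helper then grows an is_ram flag, with dma_map_resource passing false; a sketch of the declaration plus a note on the intent (the full body is omitted):

    bool dma_capable(struct device *dev, dma_addr_t addr, size_t size,
                     bool is_ram);

    /* dma_direct_map_resource passes is_ram = false so that the
     * min_low_pfn sanity check, which only applies to real memory,
     * is skipped for MMIO resources. */
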
2019-11-20 dma-mapping: drop the dev argument to arch_sync_dma_for_* (Christoph Hellwig)

These are pure cache maintenance routines, so drop the unused struct device argument.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>

2019-09-26 Merge tag 'for-linus-5.4-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip (Linus Torvalds)

Pull xen update from Juergen Gross:
 "Only two small patches this time:
  - a small cleanup for swiotlb-xen
  - a fix for PCI initialization for some platforms"

* tag 'for-linus-5.4-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/pci: reserve MCFG areas earlier
  swiotlb-xen: Convert to use macro

2019-09-19 Merge tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds)

Pull dma-mapping updates from Christoph Hellwig:

 - add dma-mapping and block layer helpers to take care of IOMMU merging for mmc plus subsequent fixups (Yoshihiro Shimoda)
 - rework handling of the pgprot bits for remapping (me)
 - take care of the dma direct infrastructure for swiotlb-xen (me)
 - improve the dma noncoherent remapping infrastructure (me)
 - better defaults for ->mmap, ->get_sgtable and ->get_required_mask (me)
 - cleanup mmaping of coherent DMA allocations (me)
 - various misc cleanups (Andy Shevchenko, me)

* tag 'dma-mapping-5.4' of git://git.infradead.org/users/hch/dma-mapping: (41 commits)
  mmc: renesas_sdhi_internal_dmac: Add MMC_CAP2_MERGE_CAPABLE
  mmc: queue: Fix bigger segments usage
  arm64: use asm-generic/dma-mapping.h
  swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page
  swiotlb-xen: simplify cache maintenance
  swiotlb-xen: use the same foreign page check everywhere
  swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable
  xen: remove the exports for xen_{create,destroy}_contiguous_region
  xen/arm: remove xen_dma_ops
  xen/arm: simplify dma_cache_maint
  xen/arm: use dev_is_dma_coherent
  xen/arm: consolidate page-coherent.h
  xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintenance
  arm: remove wrappers for the generic dma remap helpers
  dma-mapping: introduce a dma_common_find_pages helper
  dma-mapping: always use VM_DMA_COHERENT for generic DMA remap
  vmalloc: lift the arm flag for coherent mappings to common code
  dma-mapping: provide a better default ->get_required_mask
  dma-mapping: remove the dma_declare_coherent_memory export
  remoteproc: don't allow modular build
  ...

2019-09-11 swiotlb-xen: merge xen_unmap_single into xen_swiotlb_unmap_page (Christoph Hellwig)

No need for a no-op wrapper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

2019-09-11 swiotlb-xen: simplify cache maintenance (Christoph Hellwig)

Now that we know we always have the dma-noncoherent.h helpers available if we are on an architecture with support for non-coherent devices, we can just call them directly and remove the calls to the dma-direct routines, including the dma_direct_map_page calls whose return value was being ignored. Instead we now have Xen wrappers for the arch_sync_dma_for_{device,cpu} helpers that call the special Xen versions of those routines for foreign pages. Note that the new helpers get the physical address passed in addition to the dma address, to avoid another translation for the local cache maintenance. The pfn_valid checks remain on the dma address as in the old code, even if that looks a little funny.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2019-09-11 swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable (Christoph Hellwig)

There is no need to wrap the common versions, just wire them up directly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

2019-09-11 xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintenance (Christoph Hellwig)

Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA on non-coherent devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

2019-09-11 swiotlb: Split size parameter to map/unmap APIs (Lu Baolu)

This splits the size parameter to swiotlb_tbl_map_single() and swiotlb_tbl_unmap_single() into an alloc_size and a mapping_size parameter, where the latter one is rounded up to the iommu page size.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

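The map routine's signature at this point, roughly (the parameter order is an assumption):

    phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
                                       dma_addr_t tbl_dma_addr,
                                       phys_addr_t orig_addr,
                                       size_t mapping_size,
                                       size_t alloc_size,
                                       enum dma_data_direction dir,
                                       unsigned long attrs);
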
2019-09-06 swiotlb-xen: Convert to use macro (Souptick Joarder)

Rather than using a static int max_dma_bits, this can be converted to a macro.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

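The change amounts to replacing the file-scope variable with a preprocessor constant; a sketch:

    /* was: static int max_dma_bits = 32; */
    #define MAX_DMA_BITS 32
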
2019-08-02 Merge tag 'for-linus-5.3a-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip (Linus Torvalds)

Pull xen fixes from Juergen Gross:

 - a small cleanup
 - a fix for a build error on ARM with some configs
 - a fix of a patch for the Xen gntdev driver
 - three patches for fixing a potential problem in the swiotlb-xen driver, which Konrad was fine with me carrying through the Xen tree

* tag 'for-linus-5.3a-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  xen/swiotlb: remember having called xen_create_contiguous_region()
  xen/swiotlb: simplify range_straddles_page_boundary()
  xen/swiotlb: fix condition for calling xen_destroy_contiguous_region()
  xen: avoid link error on ARM
  xen/gntdev.c: Replace vm_map_pages() with vm_map_pages_zero()
  xen/pciback: remove set but not used variable 'old_state'

2019-08-01 xen/swiotlb: remember having called xen_create_contiguous_region() (Juergen Gross)

Instead of always calling xen_destroy_contiguous_region() in case the memory is DMA-able for the used device, do so only in case it has been made DMA-able via xen_create_contiguous_region() before. This will avoid a lot of xen_destroy_contiguous_region() calls for 64-bit capable devices. As the memory in question is owned by swiotlb-xen, the PG_owner_priv_1 flag of the first allocated page can be used for remembering.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>

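A hedged sketch of the bookkeeping, written against the generic accessors for PG_owner_priv_1; the patch may define its own named wrappers, so the helper spelling here is an assumption:

    /* On allocation, after making the region machine-contiguous: */
    if (xen_create_contiguous_region(phys, order, fls64(dma_mask),
                                     dma_handle) == 0)
            SetPageOwnerPriv1(virt_to_page(ret));

    /* On free, only undo what was actually done: */
    if (TestClearPageOwnerPriv1(virt_to_page(vaddr)))
            xen_destroy_contiguous_region(phys, order);
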
2019-08-01 xen/swiotlb: simplify range_straddles_page_boundary() (Juergen Gross)

range_straddles_page_boundary() is open coding several macros from include/xen/page.h. Use those instead. Additionally, there is no need to have check_pages_physically_contiguous() as a separate function, as it is used only once, so merge it into range_straddles_page_boundary().

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>

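With the xen/page.h helpers, the check walks the range in Xen-page steps and verifies the backing frames stay consecutive; a sketch close to, but not guaranteed to match, the final code:

    static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
    {
            unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
            unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);

            next_bfn = pfn_to_bfn(xen_pfn);
            for (i = 1; i < nr_pages; i++)
                    if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
                            return 1;
            return 0;
    }
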
2019-08-01 xen/swiotlb: fix condition for calling xen_destroy_contiguous_region() (Juergen Gross)

The condition in xen_swiotlb_free_coherent() for deciding whether to call xen_destroy_contiguous_region() is wrong: in case the region to be freed is not contiguous, calling xen_destroy_contiguous_region() would result in inconsistent mappings of multiple PFNs to the same MFN. This will lead to various strange crashes or data corruption. Instead of calling xen_destroy_contiguous_region() in that case, a warning should be issued, as that situation should never occur.

Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>

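The guarded call then looks roughly like this, warning instead of destroying when the sanity check fails (the exact condition is assumed):

    if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
                 range_straddles_page_boundary(phys, size)))
            xen_destroy_contiguous_region(phys, order);
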
2019-07-18 Merge branch 'for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb (Linus Torvalds)

Pull swiotlb updates from Konrad Rzeszutek Wilk:
 "One compiler fix, and a bug-fix in swiotlb_nr_tbl() and swiotlb_max_segment() to check also for no_iotlb_memory"

* 'for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
  swiotlb: fix phys_addr_t overflow warning
  swiotlb: Return consistent SWIOTLB segments/nr_tbl
  swiotlb: Group identical cleanup in swiotlb_cleanup()

2019-06-19 swiotlb: fix phys_addr_t overflow warning (Arnd Bergmann)

On architectures that have a larger dma_addr_t than phys_addr_t, the swiotlb_tbl_map_single() function truncates its return code in the failure path, making it impossible to identify the error later, as we compare to the original value:

  kernel/dma/swiotlb.c:551:9: error: implicit conversion from 'dma_addr_t' (aka 'unsigned long long') to 'phys_addr_t' (aka 'unsigned int') changes value from 18446744073709551615 to 4294967295 [-Werror,-Wconstant-conversion]
          return DMA_MAPPING_ERROR;

Use an explicit typecast here to convert it to the narrower type, and use the same expression in the error handling later.

Fixes: b907e20508d0 ("swiotlb: remove SWIOTLB_MAP_ERROR")
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

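Per the description, the fix casts the sentinel to phys_addr_t at the return site and uses the same narrowed expression in the caller's check; a sketch:

    /* failure path in swiotlb_tbl_map_single(): */
    return (phys_addr_t)DMA_MAPPING_ERROR;

    /* caller-side error check against the same narrowed value: */
    if (map == (phys_addr_t)DMA_MAPPING_ERROR)
            return DMA_MAPPING_ERROR;
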
2019-06-11 Merge branch 'stable/for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb (Linus Torvalds)

Pull swiotlb fix from Konrad Rzeszutek Wilk:
 "One tiny fix for ARM64 where we could allocate the SWIOTLB twice"

* 'stable/for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
  xen/swiotlb: don't initialize swiotlb twice on arm64

2019-06-05 treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 287 (Thomas Gleixner)

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license v2 0 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details

extracted by the scancode license scanner, the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 23 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.115786599@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2019-06-05 xen/swiotlb: don't initialize swiotlb twice on arm64 (Stefano Stabellini)

On arm64, swiotlb is often (though not always) already initialized by mem_init. We don't want to initialize it twice, which would trigger a second memory allocation. Moreover, the second memory pool is typically made of high pages and ends up replacing the original memory pool of low pages. As a side effect of this change, it is possible to have low pages in swiotlb-xen on arm64.

Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

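A sketch of the guard in the Xen init path, reusing the buffer the arch code set up; variable names follow the then-current swiotlb globals, and the exact placement is an assumption:

    if (io_tlb_start != 0) {
            /* swiotlb was already initialized by mem_init: reuse it. */
            xen_io_tlb_start = phys_to_virt(io_tlb_start);
    } else {
            /* ... allocate a fresh buffer as before ... */
    }
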
2019-05-07 Merge branch 'stable/for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb (Linus Torvalds)

Pull swiotlb updates from Konrad Rzeszutek Wilk:
 "Cleanups in the swiotlb code and extra debugfs knobs to help with the field diagnostics"

* 'stable/for-linus-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
  swiotlb-xen: ensure we have a single callsite for xen_dma_map_page
  swiotlb-xen: simplify the DMA sync method implementations
  swiotlb-xen: use ->map_page to implement ->map_sg
  swiotlb-xen: make instances match their method names
  swiotlb: save io_tlb_used to local variable before leaving critical section
  swiotlb: dump used and total slots when swiotlb buffer is full

2019-05-02 swiotlb-xen: ensure we have a single callsite for xen_dma_map_page (Christoph Hellwig)

Refactor the code a bit to make further changes easier.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2019-05-02 swiotlb-xen: simplify the DMA sync method implementations (Christoph Hellwig)

Get rid of the grand multiplexer and implement the sync_single_for_cpu and sync_single_for_device methods directly, then loop over them for the scatterlist-based variants. Note that this also loses a few comments related to high-level DMA API concepts, which have nothing to do with the swiotlb-xen implementation details.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2019-05-02 swiotlb-xen: use ->map_page to implement ->map_sg (Christoph Hellwig)

We can simply loop over the segments and map them, removing lots of duplicate code.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

2019-05-02 swiotlb-xen: make instances match their method names (Christoph Hellwig)

Just drop two pointless _attrs suffixes to make the code a little more grep-able.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>