author     Linus Torvalds <torvalds@linux-foundation.org>  2021-07-01 12:53:43 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2021-07-01 12:53:43 -0700
commit     e058a84bfddc42ba356a2316f2cf1141974625c9 (patch)
tree       e6a02dd913e83f44ea9f5a779f9b9bd56d06a9e3 /drivers/gpu/drm/i915/gem
parent     c288d9cd710433e5991d58a0764c4d08a933b871 (diff)
parent     8a02ea42bc1d4c448caf1bab0e05899dad503f74 (diff)
Merge tag 'drm-next-2021-07-01' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"Highlights:
- AMD enables two more GPUs, with resulting header files
- i915 has started to move to TTM for discrete GPUs and to enable DG1
discrete GPU support (not enabled by default yet)
- new HyperV drm driver
- vmwgfx adds arm64 support
- TTM refactoring ongoing
- 16bpc display support for AMD hw
Otherwise it's just the usual insane amounts of work all over the
place in lots of drivers and the core, as mostly summarised below:
Core:
- mark AGP ioctls as legacy
- disable force probing for non-master clients
- HDR metadata property helpers
- HDMI infoframe signal colorimetry support
- remove drm_device.pdev pointer
- remove DRM_KMS_FB_HELPER config option
- remove drm_pci_alloc/free
- drm_err_*/drm_dbg_* helpers
- use drm driver names for fbdev
- leaked DMA handle fix
- 16bpc fixed point format fourcc
- add prefetching memcpy for WC
- Documentation fixes
aperture:
- add aperture ownership helpers
dp:
- aux fixes
- downstream 0 port handling
- use extended base receiver capability DPCD
- Rename DP_PSR_SELECTIVE_UPDATE to better match eDP spec
- mst: use kHz as link rate during init
- VCPI fixes for StarTech hub
ttm:
- provide tt_shrink file via debugfs
- warn about freeing pinned BOs
- fix swapping error handling
- move page alignment into BO
- cleanup ttm_agp_backend
- add ttm_sys_manager
- don't override vm_ops
- ttm_bo_mmap removed
- make ttm_resource base of all managers
- remove VM_MIXEDMAP usage
panel:
- sysfs_emit support
- simple: runtime PM support
- simple: power up panel when reading EDID + caching
bridge:
- MHDP8546: HDCP support + DT bindings
- MHDP8546: Register DP AUX channel with userspace
- TI SN65DSI83 + SN65DSI84: add driver
- Sil8620: Fix module dependencies
- dw-hdmi: make CEC driver loading optional
- Ti-sn65dsi86: refclk fixes, subdrivers, runtime pm
- It66121: Add driver + DT bindings
- Adv7511: Support I2S IEC958 encoding
- Anx7625: fix power-on delay
- Nwl-dsi: Modesetting fixes; Cleanups
- lt9611: add missing MODULE_DEVICE_TABLE
- cdns: fix PM reference leak
hyperv:
- add new DRM driver for HyperV graphics
efifb:
- non-PCI device handling fixes
i915:
- refactor IP/device versioning
- XeLPD Display IP preparation work
- ADL-P enablement patches
- DG1 uAPI behind BROKEN
- disable mmap ioctl for discrete GPUs
- start enabling HuC loading for Gen12+
- major GuC backend rework for new platforms
- initial TTM support for Discrete GPUs
- locking rework for TTM prep
- use correct max source link rate for eDP
- %p4cc format printing
- GLK display fixes
- VLV DSI panel power fixes
- PSR2 disabled for RKL and ADL-S
- ACPI _DSM invalid access fixed
- DMC FW path abstraction
- ADL-S PCI ID update
- uAPI headers converted to kerneldoc
- initial LMEM support for DG1
- x86/gpu: add Jasperlake to gen11 early quirks
amdgpu:
- Aldebaran updates + initial SR-IOV
- new GPU: Beige Goby and Yellow Carp support
- more LTTPR display work
- Vangogh updates
- SDMA 5.x GCR fixes
- PCIe ASPM support
- Renoir TMZ enablement
- initial multiple eDP panel support
- use fdinfo to track devices/process info
- pin/unpin TTM fixes
- free resource on fence usage query
- fix fence calculation
- fix hotunplug/suspend issues
- GC/MM register access macro cleanup for SR-IOV
- W=1 fixes
- ACPI ATCS/ATIF handling rework
- 16bpc fixed point format support
- Initial smartshift support
- RV/PCO power tuning fixes
- new INFO query for additional vbios info
amdkfd:
- SR-IOV aldebaran support
- HMM SVM support
radeon:
- SMU regression fixes
- Oland flickering fix
vmwgfx:
- enable console with fbdev emulation
- fix cpu updates of coherent multisample surfaces
- remove reservation semaphore
- add initial SVGA3 support
- support arm64
msm:
- devcoredump support for display errors
- dpu/dsi: yaml bindings conversion
- mdp5: alpha/blend_mode/zpos support
- a6xx: cached coherent buffer support
- gpu iova fault improvement
- a660 support
rockchip:
- RK3036 win1 scaling support
- RK3066/3188 missing register support
- RK3036/3066/3126/3188 alpha support
mediatek:
- MT8167 HDMI support
- MT8183 DPI dual edge support
tegra:
- fixed YUV support/scaling on Tegra186+
ast:
- use pcim_iomap
- fix DP501 EDID
bochs:
- screen blanking support
etnaviv:
- export more GPU ID values to userspace
- add HWDB entry for GPU on i.MX8MP
- rework linear window calcs
exynos:
- pm runtime changes
imx:
- Annotate dma_fence critical section
- fix PRG modifiers after drmm conversion
- Add 8 pixel alignment fix for 1366x768
- fix YUV advertising
- add color properties
ingenic:
- IPU planes fix
panfrost:
- Mediatek MT8183 support + DT bindings
- export AFBC_FEATURES register to userspace
simpledrm:
- %pr for printing resources
nouveau:
- pin/unpin TTM fixes
qxl:
- unpin shadow BO
virtio:
- create dumb BOs as guest blob
vkms:
- drmm_universal_plane_alloc
- add XRGB plane composition
- overlay support"
* tag 'drm-next-2021-07-01' of git://anongit.freedesktop.org/drm/drm: (1570 commits)
drm/i915: Reinstate the mmap ioctl for some platforms
drm/i915/dsc: abstract helpers to get bigjoiner primary/secondary crtc
Revert "drm/msm/mdp5: provide dynamic bandwidth management"
drm/msm/mdp5: provide dynamic bandwidth management
drm/msm/mdp5: add perf blocks for holding fudge factors
drm/msm/mdp5: switch to standard zpos property
drm/msm/mdp5: add support for alpha/blend_mode properties
drm/msm/mdp5: use drm_plane_state for pixel blend mode
drm/msm/mdp5: use drm_plane_state for storing alpha value
drm/msm/mdp5: use drm atomic helpers to handle base drm plane state
drm/msm/dsi: do not enable PHYs when called for the slave DSI interface
drm/msm: Add debugfs to trigger shrinker
drm/msm/dpu: Avoid ABBA deadlock between IRQ modules
drm/msm: devcoredump iommu fault support
iommu/arm-smmu-qcom: Add stall support
drm/msm: Improve the a6xx page fault handler
iommu/arm-smmu-qcom: Add an adreno-smmu-priv callback to get pagefault info
iommu/arm-smmu: Add support for driver IOMMU fault handlers
drm/msm: export hangcheck_period in debugfs
drm/msm/a6xx: add support for Adreno 660 GPU
...
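Two mechanical API conversions account for much of the churn in the i915/gem diff that follows: the INTEL_GEN()/IS_GEN() macros become GRAPHICS_VER(), and the dma_resv helpers drop their _rcu suffixes (dma_resv_get_excl_rcu() becomes dma_resv_get_excl_unlocked(), dma_resv_wait_timeout_rcu() becomes dma_resv_wait_timeout(), dma_resv_get_fences_rcu() becomes dma_resv_get_fences()). A minimal before/after sketch, using a hypothetical wrapper function purely for illustration:

	/*
	 * Hypothetical kernel-style helper showing the renames applied in
	 * this diff; both conversions are purely mechanical and keep the
	 * old semantics.
	 */
	static bool obj_idle_on_gen12plus(struct drm_i915_gem_object *obj)
	{
		struct drm_i915_private *i915 = to_i915(obj->base.dev);

		/* was: INTEL_GEN(i915) >= 12; IS_GEN(i915, 12) becomes GRAPHICS_VER(i915) == 12 */
		if (GRAPHICS_VER(i915) < 12)
			return false;

		/* was: dma_resv_test_signaled_rcu(obj->base.resv, true) */
		return dma_resv_test_signaled(obj->base.resv, true);
	}

The same pattern covers i915_sg_page_sizes() becoming i915_sg_dma_sizes() in the dmabuf, phys, and userptr backends below.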
Diffstat (limited to 'drivers/gpu/drm/i915/gem')
29 files changed, 716 insertions(+), 263 deletions(-)
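Beyond the renames, the bulk of the additions below build out the new GEM_CREATE extension machinery: i915_gem_create_ext_ioctl() walks a chain of user extensions, and I915_GEM_CREATE_EXT_MEMORY_REGIONS lets userspace supply a priority-ordered list of placements (e.g. device-local memory with a system-memory fallback). A hypothetical userspace sketch against the uAPI structs referenced in this diff — note that the kernel side here still rejects the extension with -ENODEV unless CONFIG_DRM_I915_UNSTABLE_FAKE_LMEM is enabled:

	/*
	 * Hypothetical userspace sketch (not part of this diff): create a GEM
	 * object preferring device-local memory, falling back to system
	 * memory. Struct and macro names follow include/uapi/drm/i915_drm.h.
	 */
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	static int create_lmem_bo(int drm_fd, uint64_t size, uint32_t *handle)
	{
		struct drm_i915_gem_memory_class_instance regions[] = {
			{ .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
			{ .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
		};
		struct drm_i915_gem_create_ext_memory_regions ext = {
			.base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
			.num_regions = 2,
			.regions = (uintptr_t)regions,
		};
		struct drm_i915_gem_create_ext create = {
			.size = size,	/* kernel rounds up to the largest min_page_size */
			.extensions = (uintptr_t)&ext,
		};

		if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create) < 0)
			return -1;

		*handle = create.handle;
		return 0;
	}

When no extension is chained, the ioctl falls back to a single INTEL_MEMORY_SYSTEM placement, matching the legacy GEM_CREATE path; duplicate or unknown regions are rejected with -EINVAL, as set_placements() in the diff below shows.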
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c index 25235ef630c1..6234e17259c1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c @@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data, * Alternatively, we can trade that extra information on read/write * activity with * args->busy = - * !dma_resv_test_signaled_rcu(obj->resv, true); + * !dma_resv_test_signaled(obj->resv, true); * to report the overall busyness. This is what the wait-ioctl does. * */ @@ -113,11 +113,10 @@ retry: seq = raw_read_seqcount(&obj->base.resv->seq); /* Translate the exclusive fence to the READ *and* WRITE engine */ - args->busy = - busy_check_writer(rcu_dereference(obj->base.resv->fence_excl)); + args->busy = busy_check_writer(dma_resv_excl_fence(obj->base.resv)); /* Translate shared fences to READ set of engines */ - list = rcu_dereference(obj->base.resv->fence); + list = dma_resv_shared_list(obj->base.resv); if (list) { unsigned int shared_count = list->shared_count, i; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index fd8ee52e17a4..7720b8c22c81 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -1046,7 +1046,6 @@ struct context_barrier_task { void *data; }; -__i915_active_call static void cb_retire(struct i915_active *base) { struct context_barrier_task *cb = container_of(base, typeof(*cb), base); @@ -1080,7 +1079,7 @@ static int context_barrier_task(struct i915_gem_context *ctx, if (!cb) return -ENOMEM; - i915_active_init(&cb->base, NULL, cb_retire); + i915_active_init(&cb->base, NULL, cb_retire, 0); err = i915_active_acquire(&cb->base); if (err) { kfree(cb); @@ -1191,7 +1190,7 @@ static void set_ppgtt_barrier(void *data) { struct i915_address_space *old = data; - if (INTEL_GEN(old->i915) < 8) + if (GRAPHICS_VER(old->i915) < 8) gen6_ppgtt_unpin_all(i915_vm_to_ppgtt(old)); i915_vm_close(old); @@ -1437,7 +1436,7 @@ i915_gem_user_to_context_sseu(struct intel_gt *gt, context->max_eus_per_subslice = user->max_eus_per_subslice; /* Part specific restrictions. 
*/ - if (IS_GEN(i915, 11)) { + if (GRAPHICS_VER(i915) == 11) { unsigned int hw_s = hweight8(device->slice_mask); unsigned int hw_ss_per_s = hweight8(device->subslice_mask[0]); unsigned int req_s = hweight8(context->slice_mask); @@ -1504,7 +1503,7 @@ static int set_sseu(struct i915_gem_context *ctx, if (args->size < sizeof(user_sseu)) return -EINVAL; - if (!IS_GEN(i915, 11)) + if (GRAPHICS_VER(i915) != 11) return -ENODEV; if (copy_from_user(&user_sseu, u64_to_user_ptr(args->value), diff --git a/drivers/gpu/drm/i915/gem/i915_gem_create.c b/drivers/gpu/drm/i915/gem/i915_gem_create.c index 45d60e3d98e3..548ddf39d853 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_create.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_create.c @@ -4,44 +4,102 @@ */ #include "gem/i915_gem_ioctls.h" +#include "gem/i915_gem_lmem.h" #include "gem/i915_gem_region.h" #include "i915_drv.h" +#include "i915_trace.h" +#include "i915_user_extensions.h" + +static u32 object_max_page_size(struct drm_i915_gem_object *obj) +{ + u32 max_page_size = 0; + int i; + + for (i = 0; i < obj->mm.n_placements; i++) { + struct intel_memory_region *mr = obj->mm.placements[i]; + + GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); + max_page_size = max_t(u32, max_page_size, mr->min_page_size); + } + + GEM_BUG_ON(!max_page_size); + return max_page_size; +} + +static void object_set_placements(struct drm_i915_gem_object *obj, + struct intel_memory_region **placements, + unsigned int n_placements) +{ + GEM_BUG_ON(!n_placements); + + /* + * For the common case of one memory region, skip storing an + * allocated array and just point at the region directly. + */ + if (n_placements == 1) { + struct intel_memory_region *mr = placements[0]; + struct drm_i915_private *i915 = mr->i915; + + obj->mm.placements = &i915->mm.regions[mr->id]; + obj->mm.n_placements = 1; + } else { + obj->mm.placements = placements; + obj->mm.n_placements = n_placements; + } +} + +static int i915_gem_publish(struct drm_i915_gem_object *obj, + struct drm_file *file, + u64 *size_p, + u32 *handle_p) +{ + u64 size = obj->base.size; + int ret; + + ret = drm_gem_handle_create(file, &obj->base, handle_p); + /* drop reference from allocate - handle holds it now */ + i915_gem_object_put(obj); + if (ret) + return ret; + + *size_p = size; + return 0; +} static int -i915_gem_create(struct drm_file *file, - struct intel_memory_region *mr, - u64 *size_p, - u32 *handle_p) +i915_gem_setup(struct drm_i915_gem_object *obj, u64 size) { - struct drm_i915_gem_object *obj; - u32 handle; - u64 size; + struct intel_memory_region *mr = obj->mm.placements[0]; + unsigned int flags; int ret; - GEM_BUG_ON(!is_power_of_2(mr->min_page_size)); - size = round_up(*size_p, mr->min_page_size); + size = round_up(size, object_max_page_size(obj)); if (size == 0) return -EINVAL; /* For most of the ABI (e.g. mmap) we think in system pages */ GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE)); - /* Allocate the new object */ - obj = i915_gem_object_create_region(mr, size, 0); - if (IS_ERR(obj)) - return PTR_ERR(obj); + if (i915_gem_object_size_2big(size)) + return -E2BIG; - GEM_BUG_ON(size != obj->base.size); + /* + * For now resort to CPU based clearing for device local-memory, in the + * near future this will use the blitter engine for accelerated, GPU + * based clearing. 
+ */ + flags = 0; + if (mr->type == INTEL_MEMORY_LOCAL) + flags = I915_BO_ALLOC_CPU_CLEAR; - ret = drm_gem_handle_create(file, &obj->base, &handle); - /* drop reference from allocate - handle holds it now */ - i915_gem_object_put(obj); + ret = mr->ops->init_object(mr, obj, size, flags); if (ret) return ret; - *handle_p = handle; - *size_p = size; + GEM_BUG_ON(size != obj->base.size); + + trace_i915_gem_object_create(obj); return 0; } @@ -50,9 +108,12 @@ i915_gem_dumb_create(struct drm_file *file, struct drm_device *dev, struct drm_mode_create_dumb *args) { + struct drm_i915_gem_object *obj; + struct intel_memory_region *mr; enum intel_memory_type mem_type; int cpp = DIV_ROUND_UP(args->bpp, 8); u32 format; + int ret; switch (cpp) { case 1: @@ -85,10 +146,22 @@ i915_gem_dumb_create(struct drm_file *file, if (HAS_LMEM(to_i915(dev))) mem_type = INTEL_MEMORY_LOCAL; - return i915_gem_create(file, - intel_memory_region_by_type(to_i915(dev), - mem_type), - &args->size, &args->handle); + obj = i915_gem_object_alloc(); + if (!obj) + return -ENOMEM; + + mr = intel_memory_region_by_type(to_i915(dev), mem_type); + object_set_placements(obj, &mr, 1); + + ret = i915_gem_setup(obj, args->size); + if (ret) + goto object_free; + + return i915_gem_publish(obj, file, &args->size, &args->handle); + +object_free: + i915_gem_object_free(obj); + return ret; } /** @@ -103,11 +176,229 @@ i915_gem_create_ioctl(struct drm_device *dev, void *data, { struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_create *args = data; + struct drm_i915_gem_object *obj; + struct intel_memory_region *mr; + int ret; i915_gem_flush_free_objects(i915); - return i915_gem_create(file, - intel_memory_region_by_type(i915, - INTEL_MEMORY_SYSTEM), - &args->size, &args->handle); + obj = i915_gem_object_alloc(); + if (!obj) + return -ENOMEM; + + mr = intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); + object_set_placements(obj, &mr, 1); + + ret = i915_gem_setup(obj, args->size); + if (ret) + goto object_free; + + return i915_gem_publish(obj, file, &args->size, &args->handle); + +object_free: + i915_gem_object_free(obj); + return ret; +} + +struct create_ext { + struct drm_i915_private *i915; + struct drm_i915_gem_object *vanilla_object; +}; + +static void repr_placements(char *buf, size_t size, + struct intel_memory_region **placements, + int n_placements) +{ + int i; + + buf[0] = '\0'; + + for (i = 0; i < n_placements; i++) { + struct intel_memory_region *mr = placements[i]; + int r; + + r = snprintf(buf, size, "\n %s -> { class: %d, inst: %d }", + mr->name, mr->type, mr->instance); + if (r >= size) + return; + + buf += r; + size -= r; + } +} + +static int set_placements(struct drm_i915_gem_create_ext_memory_regions *args, + struct create_ext *ext_data) +{ + struct drm_i915_private *i915 = ext_data->i915; + struct drm_i915_gem_memory_class_instance __user *uregions = + u64_to_user_ptr(args->regions); + struct drm_i915_gem_object *obj = ext_data->vanilla_object; + struct intel_memory_region **placements; + u32 mask; + int i, ret = 0; + + if (args->pad) { + drm_dbg(&i915->drm, "pad should be zero\n"); + ret = -EINVAL; + } + + if (!args->num_regions) { + drm_dbg(&i915->drm, "num_regions is zero\n"); + ret = -EINVAL; + } + + if (args->num_regions > ARRAY_SIZE(i915->mm.regions)) { + drm_dbg(&i915->drm, "num_regions is too large\n"); + ret = -EINVAL; + } + + if (ret) + return ret; + + placements = kmalloc_array(args->num_regions, + sizeof(struct intel_memory_region *), + GFP_KERNEL); + if (!placements) + return -ENOMEM; + + mask 
= 0; + for (i = 0; i < args->num_regions; i++) { + struct drm_i915_gem_memory_class_instance region; + struct intel_memory_region *mr; + + if (copy_from_user(®ion, uregions, sizeof(region))) { + ret = -EFAULT; + goto out_free; + } + + mr = intel_memory_region_lookup(i915, + region.memory_class, + region.memory_instance); + if (!mr || mr->private) { + drm_dbg(&i915->drm, "Device is missing region { class: %d, inst: %d } at index = %d\n", + region.memory_class, region.memory_instance, i); + ret = -EINVAL; + goto out_dump; + } + + if (mask & BIT(mr->id)) { + drm_dbg(&i915->drm, "Found duplicate placement %s -> { class: %d, inst: %d } at index = %d\n", + mr->name, region.memory_class, + region.memory_instance, i); + ret = -EINVAL; + goto out_dump; + } + + placements[i] = mr; + mask |= BIT(mr->id); + + ++uregions; + } + + if (obj->mm.placements) { + ret = -EINVAL; + goto out_dump; + } + + object_set_placements(obj, placements, args->num_regions); + if (args->num_regions == 1) + kfree(placements); + + return 0; + +out_dump: + if (1) { + char buf[256]; + + if (obj->mm.placements) { + repr_placements(buf, + sizeof(buf), + obj->mm.placements, + obj->mm.n_placements); + drm_dbg(&i915->drm, + "Placements were already set in previous EXT. Existing placements: %s\n", + buf); + } + + repr_placements(buf, sizeof(buf), placements, i); + drm_dbg(&i915->drm, "New placements(so far validated): %s\n", buf); + } + +out_free: + kfree(placements); + return ret; +} + +static int ext_set_placements(struct i915_user_extension __user *base, + void *data) +{ + struct drm_i915_gem_create_ext_memory_regions ext; + + if (!IS_ENABLED(CONFIG_DRM_I915_UNSTABLE_FAKE_LMEM)) + return -ENODEV; + + if (copy_from_user(&ext, base, sizeof(ext))) + return -EFAULT; + + return set_placements(&ext, data); +} + +static const i915_user_extension_fn create_extensions[] = { + [I915_GEM_CREATE_EXT_MEMORY_REGIONS] = ext_set_placements, +}; + +/** + * Creates a new mm object and returns a handle to it. 
+ * @dev: drm device pointer + * @data: ioctl data blob + * @file: drm file pointer + */ +int +i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, + struct drm_file *file) +{ + struct drm_i915_private *i915 = to_i915(dev); + struct drm_i915_gem_create_ext *args = data; + struct create_ext ext_data = { .i915 = i915 }; + struct intel_memory_region **placements_ext; + struct drm_i915_gem_object *obj; + int ret; + + if (args->flags) + return -EINVAL; + + i915_gem_flush_free_objects(i915); + + obj = i915_gem_object_alloc(); + if (!obj) + return -ENOMEM; + + ext_data.vanilla_object = obj; + ret = i915_user_extensions(u64_to_user_ptr(args->extensions), + create_extensions, + ARRAY_SIZE(create_extensions), + &ext_data); + placements_ext = obj->mm.placements; + if (ret) + goto object_free; + + if (!placements_ext) { + struct intel_memory_region *mr = + intel_memory_region_by_type(i915, INTEL_MEMORY_SYSTEM); + + object_set_placements(obj, &mr, 1); + } + + ret = i915_gem_setup(obj, args->size); + if (ret) + goto object_free; + + return i915_gem_publish(obj, file, &args->size, &args->handle); + +object_free: + if (obj->mm.n_placements > 1) + kfree(placements_ext); + i915_gem_object_free(obj); + return ret; } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index ccede73c6465..616c3a2f1baf 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -209,7 +209,7 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) if (IS_ERR(pages)) return PTR_ERR(pages); - sg_page_sizes = i915_sg_page_sizes(pages->sgl); + sg_page_sizes = i915_sg_dma_sizes(pages->sgl); __i915_gem_object_set_pages(obj, pages, sg_page_sizes); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c index 5964e67c7d36..a8abc9af5ff4 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -274,7 +274,7 @@ struct i915_execbuffer { struct drm_mm_node node; /** temporary GTT binding */ unsigned long vaddr; /** Current kmap address */ unsigned long page; /** Currently mapped page index */ - unsigned int gen; /** Cached value of INTEL_GEN */ + unsigned int graphics_ver; /** Cached value of GRAPHICS_VER */ bool use_64bit_reloc : 1; bool has_llc : 1; bool has_fence : 1; @@ -500,7 +500,7 @@ eb_validate_vma(struct i915_execbuffer *eb, * also covers all platforms with local memory. */ if (entry->relocation_count && - INTEL_GEN(eb->i915) >= 12 && !IS_TIGERLAKE(eb->i915)) + GRAPHICS_VER(eb->i915) >= 12 && !IS_TIGERLAKE(eb->i915)) return -EINVAL; if (unlikely(entry->flags & eb->invalid_flags)) @@ -1049,10 +1049,10 @@ static void reloc_cache_init(struct reloc_cache *cache, cache->page = -1; cache->vaddr = 0; /* Must be a variable in the struct to allow GCC to unroll. */ - cache->gen = INTEL_GEN(i915); + cache->graphics_ver = GRAPHICS_VER(i915); cache->has_llc = HAS_LLC(i915); cache->use_64bit_reloc = HAS_64BIT_RELOC(i915); - cache->has_fence = cache->gen < 4; + cache->has_fence = cache->graphics_ver < 4; cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment; cache->node.flags = 0; reloc_cache_clear(cache); @@ -1402,7 +1402,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb, err = eb->engine->emit_bb_start(rq, batch->node.start, PAGE_SIZE, - cache->gen > 5 ? 0 : I915_DISPATCH_SECURE); + cache->graphics_ver > 5 ? 
0 : I915_DISPATCH_SECURE); if (err) goto skip_request; @@ -1439,7 +1439,7 @@ err_pool: static bool reloc_can_use_engine(const struct intel_engine_cs *engine) { - return engine->class != VIDEO_DECODE_CLASS || !IS_GEN(engine->i915, 6); + return engine->class != VIDEO_DECODE_CLASS || GRAPHICS_VER(engine->i915) != 6; } static u32 *reloc_gpu(struct i915_execbuffer *eb, @@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma) if (DBG_FORCE_RELOC) return false; - return !dma_resv_test_signaled_rcu(vma->resv, true); + return !dma_resv_test_signaled(vma->resv, true); } static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset) @@ -1503,14 +1503,14 @@ static int __reloc_entry_gpu(struct i915_execbuffer *eb, u64 offset, u64 target_addr) { - const unsigned int gen = eb->reloc_cache.gen; + const unsigned int ver = eb->reloc_cache.graphics_ver; unsigned int len; u32 *batch; u64 addr; - if (gen >= 8) + if (ver >= 8) len = offset & 7 ? 8 : 5; - else if (gen >= 4) + else if (ver >= 4) len = 4; else len = 3; @@ -1522,7 +1522,7 @@ static int __reloc_entry_gpu(struct i915_execbuffer *eb, return false; addr = gen8_canonical_addr(vma->node.start + offset); - if (gen >= 8) { + if (ver >= 8) { if (offset & 7) { *batch++ = MI_STORE_DWORD_IMM_GEN4; *batch++ = lower_32_bits(addr); @@ -1542,7 +1542,7 @@ static int __reloc_entry_gpu(struct i915_execbuffer *eb, *batch++ = lower_32_bits(target_addr); *batch++ = upper_32_bits(target_addr); } - } else if (gen >= 6) { + } else if (ver >= 6) { *batch++ = MI_STORE_DWORD_IMM_GEN4; *batch++ = 0; *batch++ = addr; @@ -1552,12 +1552,12 @@ static int __reloc_entry_gpu(struct i915_execbuffer *eb, *batch++ = 0; *batch++ = vma_phys_addr(vma, offset); *batch++ = target_addr; - } else if (gen >= 4) { + } else if (ver >= 4) { *batch++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT; *batch++ = 0; *batch++ = addr; *batch++ = target_addr; - } else if (gen >= 3 && + } else if (ver >= 3 && !(IS_I915G(eb->i915) || IS_I915GM(eb->i915))) { *batch++ = MI_STORE_DWORD_IMM | MI_MEM_VIRTUAL; *batch++ = addr; @@ -1671,7 +1671,7 @@ eb_relocate_entry(struct i915_execbuffer *eb, * batchbuffers. */ if (reloc->write_domain == I915_GEM_DOMAIN_INSTRUCTION && - IS_GEN(eb->i915, 6)) { + GRAPHICS_VER(eb->i915) == 6) { err = i915_vma_bind(target->vma, target->vma->obj->cache_level, PIN_GLOBAL, NULL); @@ -2332,7 +2332,7 @@ static int i915_reset_gen7_sol_offsets(struct i915_request *rq) u32 *cs; int i; - if (!IS_GEN(rq->engine->i915, 7) || rq->engine->id != RCS0) { + if (GRAPHICS_VER(rq->engine->i915) != 7 || rq->engine->id != RCS0) { drm_dbg(&rq->engine->i915->drm, "sol reset is gen7/rcs only\n"); return -EINVAL; } @@ -3375,7 +3375,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, eb.batch_flags = 0; if (args->flags & I915_EXEC_SECURE) { - if (INTEL_GEN(i915) >= 11) + if (GRAPHICS_VER(i915) >= 11) return -ENODEV; /* Return -EPERM to trigger fallback code on old binaries. 
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h index 7fd22f3efbef..28d6526e32ab 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_ioctls.h @@ -14,6 +14,8 @@ int i915_gem_busy_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int i915_gem_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file); +int i915_gem_create_ext_ioctl(struct drm_device *dev, void *data, + struct drm_file *file); int i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data, struct drm_file *file); int i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c index ce1c83c13d05..3b4aa28a076d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c @@ -4,22 +4,95 @@ */ #include "intel_memory_region.h" +#include "intel_region_ttm.h" #include "gem/i915_gem_region.h" #include "gem/i915_gem_lmem.h" #include "i915_drv.h" +static void lmem_put_pages(struct drm_i915_gem_object *obj, + struct sg_table *pages) +{ + intel_region_ttm_node_free(obj->mm.region, obj->mm.st_mm_node); + obj->mm.dirty = false; + sg_free_table(pages); + kfree(pages); +} + +static int lmem_get_pages(struct drm_i915_gem_object *obj) +{ + unsigned int flags; + struct sg_table *pages; + + flags = I915_ALLOC_MIN_PAGE_SIZE; + if (obj->flags & I915_BO_ALLOC_CONTIGUOUS) + flags |= I915_ALLOC_CONTIGUOUS; + + obj->mm.st_mm_node = intel_region_ttm_node_alloc(obj->mm.region, + obj->base.size, + flags); + if (IS_ERR(obj->mm.st_mm_node)) + return PTR_ERR(obj->mm.st_mm_node); + + /* Range manager is always contigous */ + if (obj->mm.region->is_range_manager) + obj->flags |= I915_BO_ALLOC_CONTIGUOUS; + pages = intel_region_ttm_node_to_st(obj->mm.region, obj->mm.st_mm_node); + if (IS_ERR(pages)) { + intel_region_ttm_node_free(obj->mm.region, obj->mm.st_mm_node); + return PTR_ERR(pages); + } + + __i915_gem_object_set_pages(obj, pages, i915_sg_dma_sizes(pages->sgl)); + + if (obj->flags & I915_BO_ALLOC_CPU_CLEAR) { + void __iomem *vaddr = + i915_gem_object_lmem_io_map(obj, 0, obj->base.size); + + if (!vaddr) { + struct sg_table *pages = + __i915_gem_object_unset_pages(obj); + + if (!IS_ERR_OR_NULL(pages)) + lmem_put_pages(obj, pages); + } + + memset_io(vaddr, 0, obj->base.size); + io_mapping_unmap(vaddr); + } + + return 0; +} + const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = { .name = "i915_gem_object_lmem", .flags = I915_GEM_OBJECT_HAS_IOMEM, - .get_pages = i915_gem_object_get_pages_buddy, - .put_pages = i915_gem_object_put_pages_buddy, + .get_pages = lmem_get_pages, + .put_pages = lmem_put_pages, .release = i915_gem_object_release_memory_region, }; +void __iomem * +i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, + unsigned long n, + unsigned long size) +{ + resource_size_t offset; + + GEM_BUG_ON(!i915_gem_object_is_contiguous(obj)); + + offset = i915_gem_object_get_dma_address(obj, n); + offset -= obj->mm.region->region.start; + + return io_mapping_map_wc(&obj->mm.region->iomap, offset, size); +} + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj) { - return obj->ops == &i915_gem_lmem_obj_ops; + struct intel_memory_region *mr = obj->mm.region; + + return mr && (mr->type == INTEL_MEMORY_LOCAL || + mr->type == INTEL_MEMORY_STOLEN_LOCAL); } struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h 
b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h index 036d53c01de9..fac6bc5a5ebb 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h @@ -14,6 +14,11 @@ struct intel_memory_region; extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops; +void __iomem * +i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj, + unsigned long n, + unsigned long size); + bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj); struct drm_i915_gem_object * diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c index 8598a1c78a4c..215326764606 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c @@ -56,10 +56,18 @@ int i915_gem_mmap_ioctl(struct drm_device *dev, void *data, struct drm_file *file) { + struct drm_i915_private *i915 = to_i915(dev); struct drm_i915_gem_mmap *args = data; struct drm_i915_gem_object *obj; unsigned long addr; + /* + * mmap ioctl is disallowed for all discrete platforms, + * and for all platforms with GRAPHICS_VER > 12. + */ + if (IS_DGFX(i915) || GRAPHICS_VER(i915) > 12) + return -EOPNOTSUPP; + if (args->flags & ~(I915_MMAP_WC)) return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index ea74cbca95be..5706d471692d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -62,6 +62,13 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj, const struct drm_i915_gem_object_ops *ops, struct lock_class_key *key, unsigned flags) { + /* + * A gem object is embedded both in a struct ttm_buffer_object :/ and + * in a drm_i915_gem_object. Make sure they are aliased. + */ + BUILD_BUG_ON(offsetof(typeof(*obj), base) != + offsetof(typeof(*obj), __do_not_access.base)); + spin_lock_init(&obj->vma.lock); INIT_LIST_HEAD(&obj->vma.list); @@ -249,6 +256,12 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915, if (obj->ops->release) obj->ops->release(obj); + if (obj->mm.n_placements > 1) + kfree(obj->mm.placements); + + if (obj->shares_resv_from) + i915_vm_resv_put(obj->shares_resv_from); + /* But keep the pointer alive for RCU-protected lookups */ call_rcu(&obj->rcu, __i915_gem_free_object_rcu); cond_resched(); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 2ebd79537aea..7c0eb425cb3b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj) struct dma_fence *fence; rcu_read_lock(); - fence = dma_resv_get_excl_rcu(obj->base.resv); + fence = dma_resv_get_excl_unlocked(obj->base.resv); rcu_read_unlock(); if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence)) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c index df8e8c18c6c9..3e28c68fda3e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c @@ -72,7 +72,7 @@ struct i915_vma *intel_emit_vma_fill_blt(struct intel_context *ce, GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); - if (INTEL_GEN(i915) >= 8) { + if (GRAPHICS_VER(i915) >= 8) { *cmd++ = XY_COLOR_BLT_CMD | BLT_WRITE_RGBA | (7 - 2); *cmd++ = BLT_DEPTH_32 | BLT_ROP_COLOR_COPY | PAGE_SIZE; *cmd++ = 0; @@ -232,7 +232,7 @@ static bool wa_1209644611_applies(struct drm_i915_private *i915, u32 size) { u32 
height = size >> PAGE_SHIFT; - if (!IS_GEN(i915, 11)) + if (GRAPHICS_VER(i915) != 11) return false; return height % 4 == 3 && height <= 8; @@ -297,7 +297,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce, size = min_t(u64, rem, block_size); GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX); - if (INTEL_GEN(i915) >= 9 && + if (GRAPHICS_VER(i915) >= 9 && !wa_1209644611_applies(i915, size)) { *cmd++ = GEN9_XY_FAST_COPY_BLT_CMD | (10 - 2); *cmd++ = BLT_DEPTH_32 | PAGE_SIZE; @@ -309,7 +309,7 @@ struct i915_vma *intel_emit_vma_copy_blt(struct intel_context *ce, *cmd++ = PAGE_SIZE; *cmd++ = lower_32_bits(src_offset); *cmd++ = upper_32_bits(src_offset); - } else if (INTEL_GEN(i915) >= 8) { + } else if (GRAPHICS_VER(i915) >= 8) { *cmd++ = XY_SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (10 - 2); *cmd++ = BLT_DEPTH_32 | BLT_ROP_SRC_COPY | PAGE_SIZE; *cmd++ = 0; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h index 8e485cb3343c..d047ea126029 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h @@ -10,6 +10,7 @@ #include <linux/mmu_notifier.h> #include <drm/drm_gem.h> +#include <drm/ttm/ttm_bo_api.h> #include <uapi/drm/i915_drm.h> #include "i915_active.h" @@ -99,7 +100,16 @@ struct i915_gem_object_page_iter { }; struct drm_i915_gem_object { - struct drm_gem_object base; + /* + * We might have reason to revisit the below since it wastes + * a lot of space for non-ttm gem objects. + * In any case, always use the accessors for the ttm_buffer_object + * when accessing it. + */ + union { + struct drm_gem_object base; + struct ttm_buffer_object __do_not_access; + }; const struct drm_i915_gem_object_ops *ops; @@ -149,6 +159,10 @@ struct drm_i915_gem_object { * when i915_gem_ww_ctx_backoff() or i915_gem_ww_ctx_fini() are called. */ struct list_head obj_link; + /** + * @shared_resv_from: The object shares the resv from this vm. + */ + struct i915_address_space *shares_resv_from; union { struct rcu_head rcu; @@ -172,11 +186,13 @@ struct drm_i915_gem_object { #define I915_BO_ALLOC_CONTIGUOUS BIT(0) #define I915_BO_ALLOC_VOLATILE BIT(1) #define I915_BO_ALLOC_STRUCT_PAGE BIT(2) +#define I915_BO_ALLOC_CPU_CLEAR BIT(3) #define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \ I915_BO_ALLOC_VOLATILE | \ - I915_BO_ALLOC_STRUCT_PAGE) -#define I915_BO_READONLY BIT(3) -#define I915_TILING_QUIRK_BIT 4 /* unknown swizzling; do not release! */ + I915_BO_ALLOC_STRUCT_PAGE | \ + I915_BO_ALLOC_CPU_CLEAR) +#define I915_BO_READONLY BIT(4) +#define I915_TILING_QUIRK_BIT 5 /* unknown swizzling; do not release! */ /* * Is the object to be mapped as read-only to the GPU @@ -220,13 +236,21 @@ struct drm_i915_gem_object { atomic_t shrink_pin; /** + * Priority list of potential placements for this object. + */ + struct intel_memory_region **placements; + int n_placements; + + /** * Memory region for this object. */ struct intel_memory_region *region; + /** - * List of memory region blocks allocated for this object. + * Memory manager node allocated for this object. */ - struct list_head blocks; + void *st_mm_node; + /** * Element within memory_region->objects or region->purgeable * if the object is marked as DONTNEED. 
Access is protected by diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 7361971c177d..6444e097016d 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -475,7 +475,8 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, might_sleep(); GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT); - GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); + if (!i915_gem_object_has_pinned_pages(obj)) + assert_object_held(obj); /* As we iterate forward through the sg, we record each entry in a * radixtree for quick repeated (backwards) lookups. If we have seen diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 81dc2bf59bc3..be72ad0634ba 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -8,7 +8,6 @@ #include <linux/shmem_fs.h> #include <linux/swap.h> -#include <drm/drm.h> /* for drm_legacy.h! */ #include <drm/drm_cache.h> #include "gt/intel_gt.h" @@ -208,7 +207,7 @@ static int i915_gem_object_shmem_to_phys(struct drm_i915_gem_object *obj) err_xfer: if (!IS_ERR_OR_NULL(pages)) { - unsigned int sg_page_sizes = i915_sg_page_sizes(pages->sgl); + unsigned int sg_page_sizes = i915_sg_dma_sizes(pages->sgl); __i915_gem_object_set_pages(obj, pages, sg_page_sizes); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c index 6a84fb6dde24..f25e6646c5b7 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c @@ -8,107 +8,9 @@ #include "i915_drv.h" #include "i915_trace.h" -void -i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, - struct sg_table *pages) -{ - __intel_memory_region_put_pages_buddy(obj->mm.region, &obj->mm.blocks); - - obj->mm.dirty = false; - sg_free_table(pages); - kfree(pages); -} - -int -i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj) -{ - const u64 max_segment = i915_sg_segment_size(); - struct intel_memory_region *mem = obj->mm.region; - struct list_head *blocks = &obj->mm.blocks; - resource_size_t size = obj->base.size; - resource_size_t prev_end; - struct i915_buddy_block *block; - unsigned int flags; - struct sg_table *st; - struct scatterlist *sg; - unsigned int sg_page_sizes; - int ret; - - st = kmalloc(sizeof(*st), GFP_KERNEL); - if (!st) - return -ENOMEM; - - if (sg_alloc_table(st, size >> PAGE_SHIFT, GFP_KERNEL)) { - kfree(st); - return -ENOMEM; - } - - flags = I915_ALLOC_MIN_PAGE_SIZE; - if (obj->flags & I915_BO_ALLOC_CONTIGUOUS) - flags |= I915_ALLOC_CONTIGUOUS; - - ret = __intel_memory_region_get_pages_buddy(mem, size, flags, blocks); - if (ret) - goto err_free_sg; - - GEM_BUG_ON(list_empty(blocks)); - - sg = st->sgl; - st->nents = 0; - sg_page_sizes = 0; - prev_end = (resource_size_t)-1; - - list_for_each_entry(block, blocks, link) { - u64 block_size, offset; - - block_size = min_t(u64, size, - i915_buddy_block_size(&mem->mm, block)); - offset = i915_buddy_block_offset(block); - - while (block_size) { - u64 len; - - if (offset != prev_end || sg->length >= max_segment) { - if (st->nents) { - sg_page_sizes |= sg->length; - sg = __sg_next(sg); - } - - sg_dma_address(sg) = mem->region.start + offset; - sg_dma_len(sg) = 0; - sg->length = 0; - st->nents++; - } - - len = min(block_size, max_segment - sg->length); - sg->length += len; - sg_dma_len(sg) += len; - - offset += len; - block_size -= len; - - prev_end = offset; - } - } - - sg_page_sizes |= 
sg->length; - sg_mark_end(sg); - i915_sg_trim(st); - - __i915_gem_object_set_pages(obj, st, sg_page_sizes); - - return 0; - -err_free_sg: - sg_free_table(st); - kfree(st); - return ret; -} - void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, struct intel_memory_region *mem) { - INIT_LIST_HEAD(&obj->mm.blocks); obj->mm.region = intel_memory_region_get(mem); if (obj->base.size <= mem->min_page_size) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h index ebddc86d78f7..84fcb3297400 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_region.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h @@ -12,10 +12,6 @@ struct intel_memory_region; struct drm_i915_gem_object; struct sg_table; -int i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj); -void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj, - struct sg_table *pages); - void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj, struct intel_memory_region *mem); void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index a9bfa66c8da1..5d16c4462fda 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -628,11 +628,13 @@ static const struct intel_memory_region_ops shmem_region_ops = { .init_object = shmem_object_init, }; -struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915) +struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915, + u16 type, u16 instance) { return intel_memory_region_create(i915, 0, totalram_pages() << PAGE_SHIFT, PAGE_SIZE, 0, + type, instance, &shmem_region_ops); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c index 4f9c8d3021ab..f4fb68e8955a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shrinker.c @@ -38,15 +38,17 @@ static bool can_release_pages(struct drm_i915_gem_object *obj) } static bool unsafe_drop_pages(struct drm_i915_gem_object *obj, - unsigned long shrink) + unsigned long shrink, bool trylock_vm) { unsigned long flags; flags = 0; if (shrink & I915_SHRINK_ACTIVE) - flags = I915_GEM_OBJECT_UNBIND_ACTIVE; + flags |= I915_GEM_OBJECT_UNBIND_ACTIVE; if (!(shrink & I915_SHRINK_BOUND)) - flags = I915_GEM_OBJECT_UNBIND_TEST; + flags |= I915_GEM_OBJECT_UNBIND_TEST; + if (trylock_vm) + flags |= I915_GEM_OBJECT_UNBIND_VM_TRYLOCK; if (i915_gem_object_unbind(obj, flags) == 0) return true; @@ -117,6 +119,9 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww, unsigned long scanned = 0; int err; + /* CHV + VTD workaround use stop_machine(); need to trylock vm->mutex */ + bool trylock_vm = !ww && intel_vm_no_concurrent_access_wa(i915); + trace_i915_gem_shrink(i915, target, shrink); /* @@ -204,7 +209,7 @@ i915_gem_shrink(struct i915_gem_ww_ctx *ww, spin_unlock_irqrestore(&i915->mm.obj_lock, flags); err = 0; - if (unsafe_drop_pages(obj, shrink)) { + if (unsafe_drop_pages(obj, shrink, trylock_vm)) { /* May arrive from get_pages on another bo */ if (!ww) { if (!i915_gem_object_trylock(obj)) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c index b0597de206de..b0c3a7dc60d1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c @@ -10,6 +10,7 @@ #include <drm/drm_mm.h> #include <drm/i915_drm.h> +#include 
"gem/i915_gem_lmem.h" #include "gem/i915_gem_region.h" #include "i915_drv.h" #include "i915_gem_stolen.h" @@ -37,7 +38,7 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *i915, return -ENODEV; /* WaSkipStolenMemoryFirstPage:bdw+ */ - if (INTEL_GEN(i915) >= 8 && start < 4096) + if (GRAPHICS_VER(i915) >= 8 && start < 4096) start = 4096; mutex_lock(&i915->mm.stolen_lock); @@ -83,14 +84,14 @@ static int i915_adjust_stolen(struct drm_i915_private *i915, */ /* Make sure we don't clobber the GTT if it's within stolen memory */ - if (INTEL_GEN(i915) <= 4 && + if (GRAPHICS_VER(i915) <= 4 && !IS_G33(i915) && !IS_PINEVIEW(i915) && !IS_G4X(i915)) { struct resource stolen[2] = {*dsm, *dsm}; struct resource ggtt_res; resource_size_t ggtt_start; ggtt_start = intel_uncore_read(uncore, PGTBL_CTL); - if (IS_GEN(i915, 4)) + if (GRAPHICS_VER(i915) == 4) ggtt_start = (ggtt_start & PGTBL_ADDRESS_LO_MASK) | (ggtt_start & PGTBL_ADDRESS_HI_MASK) << 28; else @@ -122,6 +123,14 @@ static int i915_adjust_stolen(struct drm_i915_private *i915, } /* + * With stolen lmem, we don't need to check if the address range + * overlaps with the non-stolen system memory range, since lmem is local + * to the gpu. + */ + if (HAS_LMEM(i915)) + return 0; + + /* * Verify that nothing else uses this physical address. Stolen * memory should be reserved by the BIOS and hidden from the * kernel. So if the region is already marked as busy, something @@ -147,7 +156,7 @@ static int i915_adjust_stolen(struct drm_i915_private *i915, * GEN3 firmware likes to smash pci bridges into the stolen * range. Apparently this works. */ - if (!r && !IS_GEN(i915, 3)) { + if (!r && GRAPHICS_VER(i915) != 3) { drm_err(&i915->drm, "conflict detected with stolen region: %pR\n", dsm); @@ -188,7 +197,7 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *i915, * Whether ILK really reuses the ELK register for this is unclear. * Let's see if we catch anyone with this supposedly enabled on ILK. */ - drm_WARN(&i915->drm, IS_GEN(i915, 5), + drm_WARN(&i915->drm, GRAPHICS_VER(i915) == 5, "ILK stolen reserved found? 
0x%08x\n", reg_val); @@ -374,8 +383,9 @@ static void icl_get_stolen_reserved(struct drm_i915_private *i915, } } -static int i915_gem_init_stolen(struct drm_i915_private *i915) +static int i915_gem_init_stolen(struct intel_memory_region *mem) { + struct drm_i915_private *i915 = mem->i915; struct intel_uncore *uncore = &i915->uncore; resource_size_t reserved_base, stolen_top; resource_size_t reserved_total, reserved_size; @@ -389,17 +399,17 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915) return 0; } - if (intel_vtd_active() && INTEL_GEN(i915) < 8) { + if (intel_vtd_active() && GRAPHICS_VER(i915) < 8) { drm_notice(&i915->drm, "%s, disabling use of stolen memory\n", "DMAR active"); return 0; } - if (resource_size(&intel_graphics_stolen_res) == 0) + if (resource_size(&mem->region) == 0) return 0; - i915->dsm = intel_graphics_stolen_res; + i915->dsm = mem->region; if (i915_adjust_stolen(i915, &i915->dsm)) return 0; @@ -411,7 +421,7 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915) reserved_base = stolen_top; reserved_size = 0; - switch (INTEL_GEN(i915)) { + switch (GRAPHICS_VER(i915)) { case 2: case 3: break; @@ -446,7 +456,7 @@ static int i915_gem_init_stolen(struct drm_i915_private *i915) &reserved_base, &reserved_size); break; default: - MISSING_CASE(INTEL_GEN(i915)); + MISSING_CASE(GRAPHICS_VER(i915)); fallthrough; case 11: case 12: @@ -627,10 +637,17 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem, { static struct lock_class_key lock_class; unsigned int cache_level; + unsigned int flags; int err; + /* + * Stolen objects are always physically contiguous since we just + * allocate one big block underneath using the drm_mm range allocator. + */ + flags = I915_BO_ALLOC_CONTIGUOUS; + drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size); - i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, 0); + i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class, flags); obj->stolen = stolen; obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT; @@ -640,9 +657,11 @@ static int __i915_gem_object_create_stolen(struct intel_memory_region *mem, if (WARN_ON(!i915_gem_object_trylock(obj))) return -EBUSY; + i915_gem_object_init_memory_region(obj, mem); + err = i915_gem_object_pin_pages(obj); - if (!err) - i915_gem_object_init_memory_region(obj, mem); + if (err) + i915_gem_object_release_memory_region(obj); i915_gem_object_unlock(obj); return err; @@ -667,7 +686,8 @@ static int _i915_gem_object_stolen_init(struct intel_memory_region *mem, if (!stolen) return -ENOMEM; - ret = i915_gem_stolen_insert_node(i915, stolen, size, 4096); + ret = i915_gem_stolen_insert_node(i915, stolen, size, + mem->min_page_size); if (ret) goto err_free; @@ -688,39 +708,128 @@ struct drm_i915_gem_object * i915_gem_object_create_stolen(struct drm_i915_private *i915, resource_size_t size) { - return i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_STOLEN_SMEM], - size, I915_BO_ALLOC_CONTIGUOUS); + return i915_gem_object_create_region(i915->mm.stolen_region, size, 0); } -static int init_stolen(struct intel_memory_region *mem) +static int init_stolen_smem(struct intel_memory_region *mem) { - intel_memory_region_set_name(mem, "stolen"); - /* * Initialise stolen early so that we may reserve preallocated * objects for the BIOS to KMS transition. 
*/ - return i915_gem_init_stolen(mem->i915); + return i915_gem_init_stolen(mem); +} + +static void release_stolen_smem(struct intel_memory_region *mem) +{ + i915_gem_cleanup_stolen(mem->i915); +} + +static const struct intel_memory_region_ops i915_region_stolen_smem_ops = { + .init = init_stolen_smem, + .release = release_stolen_smem, + .init_object = _i915_gem_object_stolen_init, +}; + +static int init_stolen_lmem(struct intel_memory_region *mem) +{ + int err; + + if (GEM_WARN_ON(resource_size(&mem->region) == 0)) + return -ENODEV; + + if (!io_mapping_init_wc(&mem->iomap, + mem->io_start, + resource_size(&mem->region))) + return -EIO; + + /* + * TODO: For stolen lmem we mostly just care about populating the dsm + * related bits and setting up the drm_mm allocator for the range. + * Perhaps split up i915_gem_init_stolen() for this. + */ + err = i915_gem_init_stolen(mem); + if (err) + goto err_fini; + + return 0; + +err_fini: + io_mapping_fini(&mem->iomap); + return err; } -static void release_stolen(struct intel_memory_region *mem) +static void release_stolen_lmem(struct intel_memory_region *mem) { + io_mapping_fini(&mem->iomap); i915_gem_cleanup_stolen(mem->i915); } -static const struct intel_memory_region_ops i915_region_stolen_ops = { - .init = init_stolen, - .release = release_stolen, +static const struct intel_memory_region_ops i915_region_stolen_lmem_ops = { + .init = init_stolen_lmem, + .release = release_stolen_lmem, .init_object = _i915_gem_object_stolen_init, }; -struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915) +struct intel_memory_region * +i915_gem_stolen_lmem_setup(struct drm_i915_private *i915, u16 type, + u16 instance) +{ + struct intel_uncore *uncore = &i915->uncore; + struct pci_dev *pdev = to_pci_dev(i915->drm.dev); + struct intel_memory_region *mem; + resource_size_t io_start; + resource_size_t lmem_size; + u64 lmem_base; + + lmem_base = intel_uncore_read64(uncore, GEN12_DSMBASE); + if (GEM_WARN_ON(lmem_base >= pci_resource_len(pdev, 2))) + return ERR_PTR(-ENODEV); + + lmem_size = pci_resource_len(pdev, 2) - lmem_base; + io_start = pci_resource_start(pdev, 2) + lmem_base; + + mem = intel_memory_region_create(i915, lmem_base, lmem_size, + I915_GTT_PAGE_SIZE_4K, io_start, + type, instance, + &i915_region_stolen_lmem_ops); + if (IS_ERR(mem)) + return mem; + + /* + * TODO: consider creating common helper to just print all the + * interesting stuff from intel_memory_region, which we can use for all + * our probed regions. 
+ */ + + drm_dbg(&i915->drm, "Stolen Local memory IO start: %pa\n", + &mem->io_start); + + intel_memory_region_set_name(mem, "stolen-local"); + + mem->private = true; + + return mem; +} + +struct intel_memory_region* +i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type, + u16 instance) { - return intel_memory_region_create(i915, - intel_graphics_stolen_res.start, - resource_size(&intel_graphics_stolen_res), - PAGE_SIZE, 0, - &i915_region_stolen_ops); + struct intel_memory_region *mem; + + mem = intel_memory_region_create(i915, + intel_graphics_stolen_res.start, + resource_size(&intel_graphics_stolen_res), + PAGE_SIZE, 0, type, instance, + &i915_region_stolen_smem_ops); + if (IS_ERR(mem)) + return mem; + + intel_memory_region_set_name(mem, "stolen-system"); + + mem->private = true; + return mem; } struct drm_i915_gem_object * @@ -728,7 +837,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915, resource_size_t stolen_offset, resource_size_t size) { - struct intel_memory_region *mem = i915->mm.regions[INTEL_REGION_STOLEN_SMEM]; + struct intel_memory_region *mem = i915->mm.stolen_region; struct drm_i915_gem_object *obj; struct drm_mm_node *stolen; int ret; @@ -742,8 +851,8 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915, /* KISS and expect everything to be page-aligned */ if (GEM_WARN_ON(size == 0) || - GEM_WARN_ON(!IS_ALIGNED(size, I915_GTT_PAGE_SIZE)) || - GEM_WARN_ON(!IS_ALIGNED(stolen_offset, I915_GTT_MIN_ALIGNMENT))) + GEM_WARN_ON(!IS_ALIGNED(size, mem->min_page_size)) || + GEM_WARN_ON(!IS_ALIGNED(stolen_offset, mem->min_page_size))) return ERR_PTR(-EINVAL); stolen = kzalloc(sizeof(*stolen), GFP_KERNEL); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h index b03489706796..ccdf7befc571 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h @@ -21,7 +21,13 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv, u64 end); void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv, struct drm_mm_node *node); -struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915); +struct intel_memory_region * +i915_gem_stolen_smem_setup(struct drm_i915_private *i915, u16 type, + u16 instance); +struct intel_memory_region * +i915_gem_stolen_lmem_setup(struct drm_i915_private *i915, u16 type, + u16 instance); + struct drm_i915_gem_object * i915_gem_object_create_stolen(struct drm_i915_private *dev_priv, resource_size_t size); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c index 9e8945013090..ef4d0f7dc118 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c @@ -62,14 +62,14 @@ u32 i915_gem_fence_size(struct drm_i915_private *i915, GEM_BUG_ON(!stride); - if (INTEL_GEN(i915) >= 4) { + if (GRAPHICS_VER(i915) >= 4) { stride *= i915_gem_tile_height(tiling); GEM_BUG_ON(!IS_ALIGNED(stride, I965_FENCE_PAGE)); return roundup(size, stride); } /* Previous chips need a power-of-two fence region when tiling */ - if (IS_GEN(i915, 3)) + if (GRAPHICS_VER(i915) == 3) ggtt_size = 1024*1024; else ggtt_size = 512*1024; @@ -102,7 +102,7 @@ u32 i915_gem_fence_alignment(struct drm_i915_private *i915, u32 size, if (tiling == I915_TILING_NONE) return I915_GTT_MIN_ALIGNMENT; - if (INTEL_GEN(i915) >= 4) + if (GRAPHICS_VER(i915) >= 4) return I965_FENCE_PAGE; /* @@ -130,10 +130,10 @@ i915_tiling_ok(struct 
drm_i915_gem_object *obj,
 	/* check maximum stride & object size */
 	/* i965+ stores the end address of the gtt mapping in the fence
 	 * reg, so dont bother to check the size */
-	if (INTEL_GEN(i915) >= 7) {
+	if (GRAPHICS_VER(i915) >= 7) {
 		if (stride / 128 > GEN7_FENCE_MAX_PITCH_VAL)
 			return false;
-	} else if (INTEL_GEN(i915) >= 4) {
+	} else if (GRAPHICS_VER(i915) >= 4) {
 		if (stride / 128 > I965_FENCE_MAX_PITCH_VAL)
 			return false;
 	} else {
@@ -144,7 +144,7 @@ i915_tiling_ok(struct drm_i915_gem_object *obj,
 		return false;
 	}
 
-	if (IS_GEN(i915, 2) ||
+	if (GRAPHICS_VER(i915) == 2 ||
 	    (tiling == I915_TILING_Y && HAS_128_BYTE_Y_TILING(i915)))
 		tile_width = 128;
 	else
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index a657b99ec760..7487bab11f0b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
 		return true;
 
 	/* we will unbind on next submission, still have userptr pins */
-	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout(obj->base.resv, true, false,
+				  MAX_SCHEDULE_TIMEOUT);
 	if (r <= 0)
 		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
 
@@ -173,7 +173,7 @@ alloc_table:
 		goto err;
 	}
 
-	sg_page_sizes = i915_sg_page_sizes(st->sgl);
+	sg_page_sizes = i915_sg_dma_sizes(st->sgl);
 
 	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 4b9856d5ba14..1e97520c62b2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		unsigned int count, i;
 		int ret;
 
-		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
+		ret = dma_resv_get_fences(resv, &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		 */
 		prune_fences = count && timeout >= 0;
 	} else {
-		excl = dma_resv_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_unlocked(resv);
 	}
 
 	if (excl && timeout >= 0)
@@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		unsigned int count, i;
 		int ret;
 
-		ret = dma_resv_get_fences_rcu(obj->base.resv,
-					      &excl, &count, &shared);
+		ret = dma_resv_get_fences(obj->base.resv, &excl, &count,
+					  &shared);
 		if (ret)
 			return ret;
 
@@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_unlocked(obj->base.resv);
 	}
 
 	if (excl) {
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
index d36873885cc1..176e6b22f87f 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
@@ -152,8 +152,8 @@ static int prepare_blit(const struct tiled_blits *t,
 			struct blit_buffer *src,
 			struct drm_i915_gem_object *batch)
 {
-	const int gen = INTEL_GEN(to_i915(batch->base.dev));
-	bool use_64b_reloc = gen >= 8;
+	const int ver = GRAPHICS_VER(to_i915(batch->base.dev));
+	bool use_64b_reloc = ver >= 8;
 	u32 src_pitch, dst_pitch;
 	u32 cmd, *cs;
 
@@ -171,7 +171,7 @@ static int prepare_blit(const struct tiled_blits *t,
 	*cs++ = cmd;
 
 	cmd = MI_FLUSH_DW;
-	if (gen >= 8)
+	if (ver >= 8)
 		cmd++;
 	*cs++ = cmd;
 	*cs++ = 0;
@@ -179,7 +179,7 @@ static int prepare_blit(const struct tiled_blits *t,
 	*cs++ = 0;
 
 	cmd = XY_SRC_COPY_BLT_CMD | BLT_WRITE_RGBA | (8 - 2);
-	if (gen >= 8)
+	if (ver >= 8)
 		cmd += 2;
 
 	src_pitch = t->width * 4;
@@ -666,7 +666,7 @@ static int igt_client_tiled_blits(void *arg)
 	int inst = 0;
 
 	/* Test requires explicit BLT tiling controls */
-	if (INTEL_GEN(i915) < 4)
+	if (GRAPHICS_VER(i915) < 4)
 		return 0;
 
 	if (bad_swizzling(i915)) /* Requires sane (sub-page) swizzling */
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index e937b6629019..13b088cc787e 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -221,12 +221,12 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
 		goto out_rq;
 	}
 
-	if (INTEL_GEN(ctx->engine->i915) >= 8) {
+	if (GRAPHICS_VER(ctx->engine->i915) >= 8) {
 		*cs++ = MI_STORE_DWORD_IMM_GEN4 | 1 << 22;
 		*cs++ = lower_32_bits(i915_ggtt_offset(vma) + offset);
 		*cs++ = upper_32_bits(i915_ggtt_offset(vma) + offset);
 		*cs++ = v;
-	} else if (INTEL_GEN(ctx->engine->i915) >= 4) {
+	} else if (GRAPHICS_VER(ctx->engine->i915) >= 4) {
 		*cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
 		*cs++ = 0;
 		*cs++ = i915_ggtt_offset(vma) + offset;
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 5fef592390cb..dbcfa28a9d91 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -897,7 +897,7 @@ static int rpcs_query_batch(struct drm_i915_gem_object *rpcs, struct i915_vma *v
 {
 	u32 *cmd;
 
-	GEM_BUG_ON(INTEL_GEN(vma->vm->i915) < 8);
+	GEM_BUG_ON(GRAPHICS_VER(vma->vm->i915) < 8);
 
 	cmd = i915_gem_object_pin_map(rpcs, I915_MAP_WB);
 	if (IS_ERR(cmd))
@@ -932,7 +932,7 @@ emit_rpcs_query(struct drm_i915_gem_object *obj,
 
 	GEM_BUG_ON(!intel_engine_can_store_dword(ce->engine));
 
-	if (INTEL_GEN(i915) < 8)
+	if (GRAPHICS_VER(i915) < 8)
 		return -EINVAL;
 
 	vma = i915_vma_instance(obj, ce->vm, NULL);
@@ -1100,7 +1100,7 @@ __read_slice_count(struct intel_context *ce,
 			return ret;
 	}
 
-	if (INTEL_GEN(ce->engine->i915) >= 11) {
+	if (GRAPHICS_VER(ce->engine->i915) >= 11) {
 		s_mask = GEN11_RPCS_S_CNT_MASK;
 		s_shift = GEN11_RPCS_S_CNT_SHIFT;
 	} else {
@@ -1229,7 +1229,7 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
 	int inst = 0;
 	int ret = 0;
 
-	if (INTEL_GEN(i915) < 9)
+	if (GRAPHICS_VER(i915) < 9)
 		return 0;
 
 	if (flags & TEST_RESET)
@@ -1518,7 +1518,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
 	}
 
 	*cmd++ = MI_STORE_DWORD_IMM_GEN4;
-	if (INTEL_GEN(i915) >= 8) {
+	if (GRAPHICS_VER(i915) >= 8) {
 		*cmd++ = lower_32_bits(offset);
 		*cmd++ = upper_32_bits(offset);
 	} else {
@@ -1608,7 +1608,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
 	if (IS_ERR(obj))
 		return PTR_ERR(obj);
 
-	if (INTEL_GEN(i915) >= 8) {
+	if (GRAPHICS_VER(i915) >= 8) {
 		const u32 GPR0 = engine->mmio_base + 0x600;
 
 		vm = i915_gem_context_get_vm_rcu(ctx);
@@ -1740,7 +1740,6 @@ out:
 static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
 {
 	struct i915_address_space *vm;
-	struct page *page;
 	u32 *vaddr;
 	int err = 0;
 
@@ -1748,24 +1747,18 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
 	if (!vm)
 		return -ENODEV;
 
-	page = __px_page(vm->scratch[0]);
-	if (!page) {
+	if (!vm->scratch[0]) {
 		pr_err("No scratch page!\n");
 		return -EINVAL;
 	}
 
-	vaddr = kmap(page);
-	if (!vaddr) {
-		pr_err("No (mappable) scratch page!\n");
-		return -EINVAL;
-	}
+	vaddr = __px_vaddr(vm->scratch[0]);
 
 	memcpy(out, vaddr, sizeof(*out));
 	if (memchr_inv(vaddr, *out, PAGE_SIZE)) {
 		pr_err("Inconsistent initial state of scratch page!\n");
 		err = -EINVAL;
 	}
-	kunmap(page);
 
 	return err;
 }
@@ -1783,7 +1776,7 @@ static int igt_vm_isolation(void *arg)
 	u32 expected;
 	int err;
 
-	if (INTEL_GEN(i915) < 7)
+	if (GRAPHICS_VER(i915) < 7)
 		return 0;
 
 	/*
@@ -1837,7 +1830,7 @@ static int igt_vm_isolation(void *arg)
 			continue;
 
 		/* Not all engines have their own GPR! */
-		if (INTEL_GEN(i915) < 8 && engine->class != RENDER_CLASS)
+		if (GRAPHICS_VER(i915) < 8 && engine->class != RENDER_CLASS)
 			continue;
 
 		while (!__igt_timeout(end_time, NULL)) {
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 35c15ef1327d..5575172c66f5 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -273,7 +273,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
 static unsigned int
 setup_tile_size(struct tile *tile, struct drm_i915_private *i915)
 {
-	if (INTEL_GEN(i915) <= 2) {
+	if (GRAPHICS_VER(i915) <= 2) {
 		tile->height = 16;
 		tile->width = 128;
 		tile->size = 11;
@@ -288,9 +288,9 @@ setup_tile_size(struct tile *tile, struct drm_i915_private *i915)
 		tile->size = 12;
 	}
 
-	if (INTEL_GEN(i915) < 4)
+	if (GRAPHICS_VER(i915) < 4)
 		return 8192 / tile->width;
-	else if (INTEL_GEN(i915) < 7)
+	else if (GRAPHICS_VER(i915) < 7)
 		return 128 * I965_FENCE_MAX_PITCH_VAL / tile->width;
 	else
 		return 128 * GEN7_FENCE_MAX_PITCH_VAL / tile->width;
@@ -386,7 +386,7 @@ static int igt_partial_tiling(void *arg)
 		if (err)
 			goto out_unlock;
 
-		if (pitch > 2 && INTEL_GEN(i915) >= 4) {
+		if (pitch > 2 && GRAPHICS_VER(i915) >= 4) {
 			tile.stride = tile.width * (pitch - 1);
 			err = check_partial_mappings(obj, &tile, end);
 			if (err == -EINTR)
@@ -395,7 +395,7 @@ static int igt_partial_tiling(void *arg)
 				goto out_unlock;
 		}
 
-		if (pitch < max_pitch && INTEL_GEN(i915) >= 4) {
+		if (pitch < max_pitch && GRAPHICS_VER(i915) >= 4) {
 			tile.stride = tile.width * (pitch + 1);
 			err = check_partial_mappings(obj, &tile, end);
 			if (err == -EINTR)
@@ -405,7 +405,7 @@ static int igt_partial_tiling(void *arg)
 		}
 	}
 
-	if (INTEL_GEN(i915) >= 4) {
+	if (GRAPHICS_VER(i915) >= 4) {
 		for_each_prime_number(pitch, max_pitch) {
 			tile.stride = tile.width * pitch;
 			err = check_partial_mappings(obj, &tile, end);
@@ -501,7 +501,7 @@ static int igt_smoke_tiling(void *arg)
 			tile.stride = i915_prandom_u32_max_state(max_pitch, &prng);
 			tile.stride = (1 + tile.stride) * tile.width;
-			if (INTEL_GEN(i915) < 4)
+			if (GRAPHICS_VER(i915) < 4)
 				tile.stride = rounddown_pow_of_two(tile.stride);
 		}
 
@@ -842,6 +842,24 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
 	return true;
 }
 
+static void object_set_placements(struct drm_i915_gem_object *obj,
+				  struct intel_memory_region **placements,
+				  unsigned int n_placements)
+{
+	GEM_BUG_ON(!n_placements);
+
+	if (n_placements == 1) {
+		struct drm_i915_private *i915 = to_i915(obj->base.dev);
+		struct intel_memory_region *mr = placements[0];
+
+		obj->mm.placements = &i915->mm.regions[mr->id];
+		obj->mm.n_placements = 1;
+	} else {
+		obj->mm.placements = placements;
+		obj->mm.n_placements = n_placements;
+	}
+}
+
 #define expand32(x) (((x) << 0) | ((x) << 8) | ((x) << 16) | ((x) << 24))
 static int __igt_mmap(struct drm_i915_private *i915,
 		      struct drm_i915_gem_object *obj,
@@ -950,6 +968,8 @@ static int igt_mmap(void *arg)
 			if (IS_ERR(obj))
 				return PTR_ERR(obj);
 
+			object_set_placements(obj, &mr, 1);
+
 			err = __igt_mmap(i915, obj, I915_MMAP_TYPE_GTT);
 			if (err == 0)
 				err = __igt_mmap(i915, obj, I915_MMAP_TYPE_WC);
@@ -1068,6 +1088,8 @@ static int igt_mmap_access(void *arg)
 		if (IS_ERR(obj))
 			return PTR_ERR(obj);
 
+		object_set_placements(obj, &mr, 1);
+
 		err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_GTT);
 		if (err == 0)
 			err = __igt_mmap_access(i915, obj, I915_MMAP_TYPE_WB);
@@ -1211,6 +1233,8 @@ static int igt_mmap_gpu(void *arg)
 		if (IS_ERR(obj))
 			return PTR_ERR(obj);
 
+		object_set_placements(obj, &mr, 1);
+
 		err = __igt_mmap_gpu(i915, obj, I915_MMAP_TYPE_GTT);
 		if (err == 0)
 			err = __igt_mmap_gpu(i915, obj, I915_MMAP_TYPE_WC);
@@ -1354,6 +1378,8 @@ static int igt_mmap_revoke(void *arg)
 		if (IS_ERR(obj))
 			return PTR_ERR(obj);
 
+		object_set_placements(obj, &mr, 1);
+
 		err = __igt_mmap_revoke(i915, obj, I915_MMAP_TYPE_GTT);
 		if (err == 0)
 			err = __igt_mmap_revoke(i915, obj, I915_MMAP_TYPE_WC);
diff --git a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
index 0b092c62bb34..b35c1219c852 100644
--- a/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
+++ b/drivers/gpu/drm/i915/gem/selftests/igt_gem_utils.c
@@ -44,7 +44,7 @@ igt_emit_store_dw(struct i915_vma *vma,
 		  u32 val)
 {
 	struct drm_i915_gem_object *obj;
-	const int gen = INTEL_GEN(vma->vm->i915);
+	const int ver = GRAPHICS_VER(vma->vm->i915);
 	unsigned long n, size;
 	u32 *cmd;
 	int err;
@@ -65,14 +65,14 @@ igt_emit_store_dw(struct i915_vma *vma,
 	offset += vma->node.start;
 
 	for (n = 0; n < count; n++) {
-		if (gen >= 8) {
+		if (ver >= 8) {
 			*cmd++ = MI_STORE_DWORD_IMM_GEN4;
 			*cmd++ = lower_32_bits(offset);
 			*cmd++ = upper_32_bits(offset);
 			*cmd++ = val;
-		} else if (gen >= 4) {
+		} else if (ver >= 4) {
 			*cmd++ = MI_STORE_DWORD_IMM_GEN4 |
-				 (gen < 6 ? MI_USE_GGTT : 0);
+				 (ver < 6 ? MI_USE_GGTT : 0);
 			*cmd++ = 0;
 			*cmd++ = offset;
 			*cmd++ = val;
@@ -146,7 +146,7 @@ int igt_gpu_fill_dw(struct intel_context *ce,
 		goto skip_request;
 
 	flags = 0;
-	if (INTEL_GEN(ce->vm->i915) <= 5)
+	if (GRAPHICS_VER(ce->vm->i915) <= 5)
 		flags |= I915_DISPATCH_SECURE;
 
 	err = rq->engine->emit_bb_start(rq,