author | Nirmoy Das <nirmoy.das@amd.com> | 2020-05-04 17:40:35 +0200 |
---|---|---|
committer | Christian König <christian.koenig@amd.com> | 2020-05-05 13:39:38 +0200 |
commit | 0cdea4455acd350a7f62406478e3d6d1f764cef9 | |
tree | c456be712b0d624b392089daf181c4eb8065db18 /include | |
parent | b7301fd812a3b103df422826c830dc9a979b2908 | |
drm/mm: optimize rb_hole_addr rbtree search
Userspace can severely fragment the rb_hole_addr rbtree by
manipulating alignment while allocating buffers. A fragmented
rb_hole_addr rbtree causes large delays when allocating a buffer
object for a userspace application: if the first attempt fails to
find a suitable hole, the search falls back to visiting neighbouring
nodes with rb_prev()/rb_next(), and traversing the rbtree that way
can take a very long time when the tree is fragmented.
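For illustration, the slow path being replaced has roughly this
shape (a hedged sketch, not the pre-patch kernel code verbatim;
struct drm_mm_node and its rb_hole_addr/hole_size fields are real,
while walk_next_holes() is an invented name):

#include <linux/rbtree.h>
#include <drm/drm_mm.h>

/* Visit neighbouring holes one at a time until one is big enough:
 * linear in the number of holes walked on a fragmented tree. */
static struct drm_mm_node *
walk_next_holes(struct drm_mm_node *node, u64 size)
{
	struct rb_node *rb;

	for (rb = rb_next(&node->rb_hole_addr); rb; rb = rb_next(rb)) {
		struct drm_mm_node *hole =
			rb_entry(rb, struct drm_mm_node, rb_hole_addr);

		if (hole->hole_size >= size)
			return hole;
	}
	return NULL;
}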
This patch speeds up searches in a fragmented rb_hole_addr rbtree by
converting it into an augmented rbtree that stores an extra field in
drm_mm_node, subtree_max_hole. Each drm_mm_node now records the
maximum hole size of its subtree in drm_mm_node->subtree_max_hole.
Using this, an entire subtree can be skipped if it cannot serve the
request, reducing the number of rb_prev()/rb_next() calls needed.
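The pruning this enables looks roughly like the following
self-contained sketch. It uses a toy node type instead of the
kernel's struct drm_mm_node/rb_node, and find_hole() is an
illustrative name, not a function from the patch:

#include <stddef.h>
#include <stdint.h>

struct hole_node {
	struct hole_node *left, *right; /* children, ordered by hole address */
	uint64_t hole_size;             /* size of this node's hole */
	uint64_t subtree_max_hole;      /* max hole_size within this subtree */
};

/*
 * Return the lowest-addressed hole of at least 'size', or NULL.
 * Any subtree whose subtree_max_hole is too small is skipped
 * wholesale instead of being walked node by node.
 */
static struct hole_node *find_hole(struct hole_node *node, uint64_t size)
{
	while (node && node->subtree_max_hole >= size) {
		if (node->left && node->left->subtree_max_hole >= size)
			node = node->left;
		else if (node->hole_size >= size)
			return node;
		else
			node = node->right;
	}
	return NULL;
}

Every iteration either returns or descends one level, so the search
cost is bounded by the tree height rather than by the number of
holes visited.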
With this patch applied, 1 million bo allocs on amdgpu take ~8 sec,
whereas without it even 50k bo allocs took 28 sec.
partial test code:

#include <fcntl.h>
#include <stdint.h>

#include <amdgpu.h>
#include <amdgpu_drm.h>

/* 1 million allocations, matching the measurement above. */
#define MAX_ALLOC 1000000

/* File scope: ~8 MB of handles is too large for the stack. */
static amdgpu_bo_handle vram_handle[MAX_ALLOC];

int test_fragmentation(void)
{
	int i = 0;
	uint32_t minor_version;
	uint32_t major_version;
	struct amdgpu_bo_alloc_request request = {};
	amdgpu_device_handle device_handle;

	/* 4 KiB buffers with 8 KiB alignment leave a 4 KiB hole behind
	 * each allocation, fragmenting the rb_hole_addr rbtree. */
	request.alloc_size = 4096;
	request.phys_alignment = 8192;
	request.preferred_heap = AMDGPU_GEM_DOMAIN_VRAM;

	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);

	amdgpu_device_initialize(fd, &major_version, &minor_version,
				 &device_handle);

	for (i = 0; i < MAX_ALLOC; i++)
		amdgpu_bo_alloc(device_handle, &request, &vram_handle[i]);

	for (i = 0; i < MAX_ALLOC; i++)
		amdgpu_bo_free(vram_handle[i]);

	return 0;
}
v2:
Use RB_DECLARE_CALLBACKS_MAX to maintain subtree_max_hole (see the
sketch after this changelog)
v3:
make insert_hole_addr() a static function
fix return value of next_hole_high_addr()/next_hole_low_addr()
Reported-by: kbuild test robot <lkp@intel.com>
v4:
Fix commit message.
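As referenced in the v2 note above, maintaining subtree_max_hole is
delegated to the kernel's augmented-rbtree helper macro. A hedged
sketch of what such a declaration looks like (HOLE_SIZE and
augment_callbacks follow rbtree_augmented.h conventions; the exact
names used in the patch may differ):

#include <linux/rbtree_augmented.h>
#include <drm/drm_mm.h>

/* A node's own contribution to the subtree maximum. */
#define HOLE_SIZE(node) ((node)->hole_size)

/*
 * Generates augment_callbacks, which keep subtree_max_hole equal to
 * the maximum HOLE_SIZE() across each node's subtree while the
 * rb_hole_addr tree is rebalanced.
 */
RB_DECLARE_CALLBACKS_MAX(static, augment_callbacks,
			 struct drm_mm_node, rb_hole_addr,
			 u64, subtree_max_hole, HOLE_SIZE)

Insertions and erasures then go through rb_insert_augmented() and
rb_erase_augmented() with &augment_callbacks so the field stays
consistent.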
Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/364341/
Signed-off-by: Christian König <christian.koenig@amd.com>
Diffstat (limited to 'include')
-rw-r--r-- | include/drm/drm_mm.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index ee8b0e80ca90..a01bc6fac83c 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -168,6 +168,7 @@ struct drm_mm_node {
 	struct rb_node rb_hole_addr;
 	u64 __subtree_last;
 	u64 hole_size;
+	u64 subtree_max_hole;
 	unsigned long flags;
 #define DRM_MM_NODE_ALLOCATED_BIT	0
 #define DRM_MM_NODE_SCANNED_BIT	1