author    Chris Wilson <chris@chris-wilson.co.uk>    2013-07-09 09:22:39 +0100
committer Daniel Vetter <daniel.vetter@ffwll.ch>     2013-07-10 10:41:55 +0200
commit    02978ff57a5bdfbf703d2bc5a4d933a53ede3144 (patch)
tree      d670634cd62c3e5fd6bb71c24157a1cecf98786a /drivers/gpu/drm/i915
parent    c11e5f35ab490bd30591563816fbc83526521777 (diff)
drm/i915: Fix write-read race with multiple rings
Daniel noticed a problem where, if we wrote to an object with ring A in the middle of a very long running batch, then executed a quick batch on ring B before a batch that reads from the same object, its obj->ring would now point to ring B, but its last_write_seqno would still be relative to ring A. This would allow the user to read from the object before the GPU had completed the write, as set_domain would only check that ring B had passed the last_write_seqno.

To fix this simply (and inelegantly), we bump the last_write_seqno when switching rings so that the last_write_seqno is always relative to the current obj->ring.

This fixes igt/tests/gem_write_read_ring_switch.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
[danvet: Add note about the newly created igt which exercises this bug.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
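For context, a minimal standalone sketch of why a write seqno recorded against ring A cannot be checked against ring B's progress. This is illustrative only, not the driver's data structures: the names ring, object, completed_seqno and object_write_is_done are invented for this example.

    #include <stdint.h>

    /* Each ring retires its requests by advancing an independent seqno
     * counter, so a seqno issued on ring A says nothing about ring B. */
    struct ring {
    	uint32_t completed_seqno;	/* last seqno this ring has retired */
    };

    struct object {
    	struct ring *ring;		/* ring of the most recent access */
    	uint32_t last_write_seqno;	/* only meaningful relative to obj->ring */
    };

    /* A set_domain-style check: "has the pending write finished?".  If
     * obj->ring was flipped to ring B while the write is still queued on
     * ring A, this compares a ring-A seqno against ring-B progress and
     * can report completion far too early -- the race the patch closes. */
    static int object_write_is_done(const struct object *obj)
    {
    	return obj->last_write_seqno == 0 ||
    	       obj->ring->completed_seqno >= obj->last_write_seqno;
    }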
Diffstat (limited to 'drivers/gpu/drm/i915')
-rw-r--r--  drivers/gpu/drm/i915/i915_gem.c | 4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7f368d79f7d2..8fd8e82ebda4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1880,6 +1880,10 @@ i915_gem_object_move_to_active(struct drm_i915_gem_object *obj,
 	u32 seqno = intel_ring_get_seqno(ring);
 
 	BUG_ON(ring == NULL);
+	if (obj->ring != ring && obj->last_write_seqno) {
+		/* Keep the seqno relative to the current ring */
+		obj->last_write_seqno = seqno;
+	}
 	obj->ring = ring;
 
 	/* Add a reference if we're newly entering the active list. */
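In terms of the sketch above, the patched bookkeeping amounts to retagging the stale write seqno whenever the object starts being tracked on a different ring. Again an illustration rather than the kernel function; move_to_active and new_seqno are stand-ins, and the assumption that the new request on the new ring is ordered after the earlier write (via the kernel's cross-ring synchronization before submission) is what makes the simple bump conservative rather than incorrect.

    /* Stand-in for the patched i915_gem_object_move_to_active() logic:
     * when the object becomes active on a different ring, retag the
     * pending write with the new request's seqno so that
     * object_write_is_done() above always compares numbers that belong
     * to the same ring. */
    static void move_to_active(struct object *obj, struct ring *ring,
    			   uint32_t new_seqno)
    {
    	if (obj->ring != ring && obj->last_write_seqno) {
    		/* Keep the seqno relative to the current ring */
    		obj->last_write_seqno = new_seqno;
    	}
    	obj->ring = ring;
    }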