path: root/drivers/gpu/drm/i915/i915_gem_render_state.c
Age | Commit message | Author

2015-12-11 | drm/i915: mark GEM object pages dirty when mapped & written by the CPU | Dave Gordon

In various places, a single page of a (regular) GEM object is mapped into CPU address space and updated. In each such case, either the page or the object should be marked dirty, to ensure that the modifications are not discarded if the object is evicted under memory pressure. The typical sequence is:

    va = kmap_atomic(i915_gem_object_get_page(obj, pageno));
    *(va+offset) = ...
    kunmap_atomic(va);

Here we introduce i915_gem_object_get_dirty_page(), which performs the same operation as i915_gem_object_get_page() but with the side effect of marking the returned page dirty in the pagecache. This ensures that if the object is subsequently evicted (due to memory pressure), the changes are written to the backing store rather than discarded.

Note that it works only for regular (shmfs-backed) GEM objects, but (at least for now) those are the only ones that are updated in this way -- the objects in question are contexts and batchbuffers, which are always shmfs-backed.

Separate patches deal with the cases where whole objects are (or may be) dirtied.

v3: Mark two more pages dirty in the page-boundary-crossing cases of the execbuffer relocation code [Chris Wilson]

Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1449773486-30822-2-git-send-email-david.s.gordon@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

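A minimal sketch of such a helper, assuming the shmfs-backed fast path described above (the exact guards in the driver may differ):

    /* Hedged sketch: return the page like i915_gem_object_get_page(),
     * but mark it dirty so eviction writes it back instead of dropping it. */
    struct page *
    i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, int n)
    {
            struct page *page = i915_gem_object_get_page(obj, n);

            set_page_dirty(page);
            return page;
    }

Callers then substitute it for i915_gem_object_get_page() in the kmap_atomic() sequence quoted above.
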
2015-07-21 | drm/i915: Add provision to extend Golden context batch | Arun Siluvery

The Golden batch carries 3D state at the beginning so that the HW starts with a known state. It is carried as a binary blob which is auto-generated from source. The idea was that it would be easier to maintain and would keep the complexity out of the kernel, which makes sense as we don't really touch it. However, if you do need to update it, you have to update the generator source and keep the binary blob in sync with it.

There is a need to patch this in bxt to send one additional command to enable a feature. A possible solution was to patch the binary data with some additional data structures (included as part of the auto-generator source), but it was unnecessarily complicated.

Chris suggested the idea of having a secondary batch and executing two batch buffers. It has clear advantages: we needn't touch the base golden batch, the secondary/auxiliary batch can be customized per Gen, and it can be carried in the driver with no external dependencies.

This patch adds support for this auxiliary batch, which is inserted at the end of the golden batch and is completely independent from it. Thanks to Mika for the preliminary review.

v2: Strictly conform to the batch size requirements to cover Gen2 and add comments to clarify the overflow check in the macro (Chris, Mika).
v3: aux_batch_offset was declared as u64, change it to u32 (Chris)

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Armin Reese <armin.c.reese@intel.com>
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

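A sketch of the kind of bounds-checked append this enables; the helper name and layout are assumptions, but the strict size check mirrors the v2 note above:

    /* Hypothetical helper: copy an auxiliary batch in after the golden
     * batch, refusing to overflow the batch object (sizes in dwords). */
    static int append_aux_batch(u32 *batch, u32 batch_dwords, u32 max_dwords,
                                const u32 *aux, u32 aux_dwords)
    {
            if (batch_dwords + aux_dwords > max_dwords)
                    return -ENOSPC; /* strict, so even Gen2-sized batches are safe */

            memcpy(batch + batch_dwords, aux, aux_dwords * sizeof(u32));
            return 0;
    }

The start of the auxiliary batch is then recorded as a u32 offset (aux_batch_offset, per the v3 note) so it can be dispatched separately from the base golden batch.
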
2015-07-21 | drm/i915: Do kunmap if renderstate parsing fails | Mika Kuoppala

Kunmap the renderstate page on the error path.

Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

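The shape of the fix, sketched with a hypothetical helper standing in for the parsing loop:

    void *d = kmap(page);
    int ret;

    ret = parse_render_state(d);    /* hypothetical stand-in for the real loop */
    if (ret) {
            kunmap(page);           /* the fix: unmap on the error path as well */
            return ret;
    }

    kunmap(page);
    return 0;
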
2015-06-23 | drm/i915: Update ring->dispatch_execbuffer() to take a request structure | John Harrison

Updated the various ring->dispatch_execbuffer() implementations to take a request instead of a ring.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

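Illustratively, the vfunc changes shape roughly like this (parameter list reconstructed from memory, so treat it as a sketch):

    /* before: callers pass the engine and rely on the outstanding lazy request */
    int (*dispatch_execbuffer)(struct intel_engine_cs *ring,
                               u64 offset, u32 length,
                               unsigned dispatch_flags);

    /* after: the request carries the engine plus context/seqno bookkeeping */
    int (*dispatch_execbuffer)(struct drm_i915_gem_request *req,
                               u64 offset, u32 length,
                               unsigned dispatch_flags);
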
2015-06-23 | drm/i915: Update [vma|object]_move_to_active() to take request structures | John Harrison

Now that everything above has been converted to use request structures, it is possible to update the lower level move_to_active() functions to be request based as well.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

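A sketch of the prototype change (not a verbatim diff):

    /* before */
    void i915_vma_move_to_active(struct i915_vma *vma,
                                 struct intel_engine_cs *ring);

    /* after: the request pins down exactly which submission the vma is active for */
    void i915_vma_move_to_active(struct i915_vma *vma,
                                 struct drm_i915_gem_request *req);
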
2015-06-23 | drm/i915: Update render_state_init() to take a request structure | John Harrison

Updated the two render_state_init() functions to take a request pointer instead of a ring. This removes their reliance on the OLR.

v2: Rebased to newer tree.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

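The resulting prototype looks roughly like this (a sketch; the Execlists variant changes in the same way):

    /* before: the engine was passed and the OLR supplied the request */
    int i915_gem_render_state_init(struct intel_engine_cs *ring);

    /* after: the caller-provided request is used for submission and tracking */
    int i915_gem_render_state_init(struct drm_i915_gem_request *req);
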
2015-06-23 | drm/i915: Add explicit request management to i915_gem_init_hw() | John Harrison

Now that a single per-ring loop is being done for all the different initialisation steps in i915_gem_init_hw(), it is possible to add proper request management as well. The last remaining issue is that the context enable call eventually ends up within *_render_state_init() and this does its own private _i915_add_request() call.

This patch adds explicit request creation and submission to the top level loop and removes the add_request() from deep within the sub-functions.

v2: Updated for removal of batch_obj from add_request call in previous patch.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

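A heavily abbreviated sketch of the resulting loop; the helper names follow the series but are quoted from memory, so treat them as assumptions:

    for_each_ring(ring, dev_priv, i) {
            struct drm_i915_gem_request *req;

            /* one explicit request per engine for all of the init work */
            ret = i915_gem_request_alloc(ring, ring->default_context, &req);
            if (ret)
                    goto out;

            ret = i915_ppgtt_init_ring(req);
            if (!ret)
                    ret = i915_gem_context_enable(req); /* reaches *_render_state_init() */

            /* the sub-functions no longer add the request themselves */
            i915_add_request_no_flush(req);
            if (ret)
                    goto out;
    }
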
2015-06-23 | drm/i915: Don't tag kernel batches as user batches | John Harrison

The render state initialisation code does an explicit i915_add_request() call to commit the init commands. It was passing in the initialisation batch buffer to add_request() as the batch object parameter. However, the batch object entry in the request structure (which is all that parameter is used for) is meant for keeping track of user generated batch buffers for blame tagging during GPU hangs.

This patch clears the batch object parameter so that kernel generated batch buffers are not tagged as being user generated.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

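Illustratively, with a deliberately abbreviated call signature (this is a sketch, not the driver's exact argument list):

    /* before: the init batch buffer was passed and would be blamed on a hang */
    __i915_add_request(req, so.obj);

    /* after: kernel-generated batches carry no blame object */
    __i915_add_request(req, NULL);
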
2015-06-23 | drm/i915: Add flag to i915_add_request() to skip the cache flush | John Harrison

In order to explicitly track all GPU work (and completely remove the outstanding lazy request), it is necessary to add extra i915_add_request() calls to various places. Some of these do not need the implicit cache flush done as part of the standard batch buffer submission process.

This patch adds a flag to _add_request() to specify whether the flush is required or not.

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

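The flag is most visible through the convenience wrappers it enables; a sketch (wrapper names as used later in the series, quoted from memory):

    #define i915_add_request(req) \
            __i915_add_request(req, NULL, true)     /* flush caches, the default */
    #define i915_add_request_no_flush(req) \
            __i915_add_request(req, NULL, false)    /* skip the flush, e.g. for init-time requests */
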
2015-06-23 | drm/i915: i915_add_request must not fail | John Harrison

The i915_add_request() function is called to keep track of work that has been written to the ring buffer. It adds epilogue commands to track progress (seqno updates and such), moves the request structure onto the right list and does other such housekeeping tasks. However, the work itself has already been written to the ring and will get executed whether or not the add request call succeeds. So no matter what goes wrong, there isn't a whole lot of point in failing the call.

At the moment, this is fine(ish). If the add request does bail out early and not do the housekeeping, the request will still float around in the ring->outstanding_lazy_request field and be picked up next time. It means multiple pieces of work will be tagged as the same request and the driver can't actually wait for the first piece of work until something else has been submitted. But it all sort of hangs together.

This patch series is all about removing the OLR and guaranteeing that each piece of work gets its own personal request. That means that there is no more 'hoovering up of forgotten requests'. If the request does not get tracked then it will be leaked. Thus the add request call _must_ not fail. The previous patch should have already ensured that it _will_ not fail by removing the potential for running out of ring space. This patch enforces the rule by actually removing the early exit paths and the return code.

Note that if something does manage to fail and the epilogue commands don't get written to the ring, the driver will still hang together. The request will be added to the tracking lists. And as in the old case, any subsequent work will generate a new seqno which will suffice for marking the old one as complete.

v2: Improved WARNings (Tomas Elf review request).

For: VIZ-5115
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tomas Elf <tomas.elf@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

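The enforcement boils down to warning instead of returning; a sketch of the pattern (not the literal diff):

    /* The work is already in the ring; failing here cannot un-submit it,
     * so an emit error in the epilogue is loud but non-fatal. */
    ret = ring->add_request(request);
    WARN(ret, "emitting the request epilogue failed: %d\n", ret);
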
2014-12-03 | drm/i915: Remove obsolete seqno parameter from 'i915_add_request' | John Harrison

There is no longer any need to retrieve a seqno value from an i915_add_request() call. The calling code already knows which request structure is being processed (it can only be ring->OLR). And as the request itself is now used in preference to the basic seqno value, the latter is now redundant in this situation.

For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

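Roughly (the older signature is reconstructed from memory, so treat this as a sketch):

    /* before: callers could ask for the raw seqno back */
    int __i915_add_request(struct intel_engine_cs *ring,
                           struct drm_file *file,
                           struct drm_i915_gem_object *batch_obj,
                           u32 *out_seqno);

    /* after: the request structure itself is the handle callers keep */
    int __i915_add_request(struct intel_engine_cs *ring,
                           struct drm_file *file,
                           struct drm_i915_gem_object *batch_obj);
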
2014-11-04 | drm/i915 Add golden context support for Gen9 | Armin Reese

This patch includes the Gen9 batch buffer to generate a 'golden context' for that product family.

Signed-off-by: Armin Reese <armin.c.reese@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

2014-09-03 | drm/i915/bdw: Render state init for Execlists | Oscar Mateo

The batchbuffer that sets the render context state is submitted in a different way, and from different places. We needed to make both the render state preparation and free functions accessible from outside, and namespace them accordingly. This mess is so that all LR, LRC and Execlists functionality can go together in intel_lrc.c: we can fix all of this later on, once the interfaces are clear.

v2: Create a separate ctx->rcs_initialized for the Execlists case, as suggested by Chris Wilson.

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>

v3: Set up the ring status page in lr_context_deferred_create when the default context is being created. This means that the render state init for the default context is no longer a special case. Execute deferred creation of the default context at the end of logical_ring_init to allow the render state commands to be submitted. Fix style errors reported by checkpatch. Rebased.

Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

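The split exposes roughly these entry points so that intel_lrc.c can drive the same golden-state batch through its own submission path (names as recalled, treat as a sketch):

    int i915_gem_render_state_prepare(struct intel_engine_cs *ring,
                                      struct render_state *so);
    void i915_gem_render_state_fini(struct render_state *so);
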
2014-06-16 | drm/i915: Simplify processing of the golden render context state | Chris Wilson

Rewrite i915_gem_render_state.c for the purposes of clarity and compactness; in the process we can eliminate some dodgy math that did not handle 64-bit addresses correctly.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

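The relocation handling after the rewrite looks roughly like this, heavily trimmed (the real loop carries additional bounds checks and the has_64bit_reloc condition is a stand-in for the per-gen test):

    u32 s = rodata->batch[i];

    if (i * 4 == rodata->reloc[reloc_index]) {
            u64 r = s + so->ggtt_offset;        /* 64-bit address, no truncating math */

            s = lower_32_bits(r);
            if (has_64bit_reloc) {
                    d[i++] = s;
                    s = upper_32_bits(r);       /* high dword emitted separately */
            }
            reloc_index++;
    }

    d[i++] = s;
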
2014-05-22 | drm/i915: s/intel_ring_buffer/intel_engine_cs | Oscar Mateo

In the upcoming patches we plan to break the correlation between engine command streamers (a.k.a. rings) and ringbuffers, so it makes sense to refactor the code and make the change obvious. No functional changes.

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

2014-05-22 | drm/i915: Add null state batch to active list | Mika Kuoppala

for proper refcounting to take place as we use i915_add_request() for it. i915_add_request() also takes the context for the request from ring->last_context, so move the null state batch submission after the ring context has been set.

v2: we need to check for the correct ring now (Ville Syrjälä)
v3: no need to expose i915_gem_move_object_to_active (Chris Wilson)
v4: cargoculted vma/active/inactive error handling removed (Chris Wilson)

Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

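The ordering the changelog describes, sketched with abbreviated call names (treat as an outline rather than the literal diff):

    /* the request takes its context from ring->last_context,
     * so make sure the intended context is current first */
    ret = i915_switch_context(ring, ring->default_context);
    if (ret)
            return ret;

    /* only then submit the null state batch; moving it to the active list
     * lets request refcounting keep the object alive until retirement */
    ret = i915_gem_render_state_init(ring);
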
2014-05-14 | drm/i915: add render state initialization | Mika Kuoppala

The HW guys say that it is not a cool idea to let the device go into rc6 without a proper 3D pipeline state. For each new uninitialized context, generate a valid null render state to be run on context creation. This patch introduces a skeleton with empty states.

v2:
- No need to vmap (Chris Wilson)
- use .c files for state (Daniel Vetter)
- no need to flush as i915_add_request does it
- remove parameter for batch alloc size
- don't wait for the init (Ben Widawsky)

v3:
- move to cpu/gpu (Chris Wilson)

Tested-by: Kristen Carlson Accardi <kristen@linux.intel.com> (v1)
Tested-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

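Conceptually, the null state batch is run exactly once per freshly created context; a sketch of that gate (the flag name is an assumption):

    /* Only a context that has never run needs the null render state;
     * the is_initialized flag name is illustrative. */
    if (!ctx->is_initialized) {
            ret = i915_gem_render_state_init(ring);
            if (ret)
                    return ret;
            ctx->is_initialized = true;
    }
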