On Wed, Sep 21, 2022 at 12:26:17PM -0700, Matt Roper wrote:
> On Wed, Sep 21, 2022 at 12:58:08PM -0400, Kumar Valsan, Prathap wrote:
> > On Fri, Sep 16, 2022 at 07:53:40AM -0700, Matt Roper wrote:
> > > On Fri, Sep 16, 2022 at 10:02:32AM +0100, Tvrtko Ursulin wrote:
> > >
On Fri, Sep 16, 2022 at 07:53:40AM -0700, Matt Roper wrote:
> On Fri, Sep 16, 2022 at 10:02:32AM +0100, Tvrtko Ursulin wrote:
> >
> > On 16/09/2022 02:43, Matt Roper wrote:
> > > Although the bspec lists several MMIO ranges as "MSLICE," it turns out
> > > that a subset of these are of a "GAM" subc
On Mon, May 02, 2022 at 09:34:16AM -0700, Matt Roper wrote:
> From: Lucas De Marchi
>
> Now that we have more copy engines, mask all of them from the aux
> table invalidate.
>
> Cc: Prathap Kumar Valsan
> Signed-off-by: Lucas De Marchi
> Signed-off-by: Matt Roper
> ---
> drivers/gpu/drm/i915/gt/ge
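The masking idea above reduces to collecting every copy-engine instance
into one mask; a minimal standalone sketch, with simplified stand-in
types rather than the driver's real structures:

/*
 * Hedged sketch (not the actual i915 change): with more than one copy
 * engine present, build a mask covering every copy-engine instance so
 * all of them, not just BCS0, are masked from the aux table invalidate.
 */
#include <stdint.h>
#include <stdio.h>

enum engine_class { CLASS_RENDER, CLASS_COPY, CLASS_VIDEO };

struct engine {
	enum engine_class class;
	unsigned int instance;
};

static uint32_t all_copy_engines_mask(const struct engine *e, unsigned int n)
{
	uint32_t mask = 0;

	for (unsigned int i = 0; i < n; i++)
		if (e[i].class == CLASS_COPY)
			mask |= UINT32_C(1) << e[i].instance;
	return mask;
}

int main(void)
{
	const struct engine engines[] = {
		{ CLASS_RENDER, 0 }, { CLASS_COPY, 0 }, { CLASS_COPY, 1 },
	};

	/* Two copy engines -> mask 0x3, instead of a BCS0-only 0x1. */
	printf("0x%x\n", all_copy_engines_mask(engines, 3));
	return 0;
}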
On Wed, Apr 27, 2022 at 09:19:24PM -0700, Matt Roper wrote:
> Compute engines have a separate register that the driver should use to
> perform MMIO-based TLB invalidation.
>
> Note that the term "context" in this register's bspec description is
> used to refer to the engine instance (in the same w
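A hedged sketch of the dispatch that description implies; the register
offsets below are made-up placeholders, not bspec values:

/*
 * Sketch only: compute engines use a dedicated TLB-invalidation
 * register, while other engines keep their per-engine one.
 */
#include <stdint.h>
#include <stdio.h>

enum engine_class { CLASS_RENDER, CLASS_COPY, CLASS_COMPUTE };

#define FAKE_RING_TLB_INV(base)	((base) + 0x120)	/* placeholder */
#define FAKE_COMPUTE_TLB_INV	0xced0			/* placeholder */

static uint32_t tlb_inv_reg(enum engine_class class, uint32_t mmio_base)
{
	if (class == CLASS_COMPUTE)
		return FAKE_COMPUTE_TLB_INV;	/* shared compute register */
	return FAKE_RING_TLB_INV(mmio_base);	/* per-engine register */
}

int main(void)
{
	printf("0x%x\n", tlb_inv_reg(CLASS_COMPUTE, 0x1c000));
	return 0;
}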
On Mon, Apr 25, 2022 at 11:41:36AM +0100, Tvrtko Ursulin wrote:
>
> On 22/04/2022 20:50, Matt Roper wrote:
> > We're now ready to start exposing compute engines to userspace.
> >
> > While we're at it, let's extend the kerneldoc description for the other
> > engine types as well.
> >
> > Cc: Dan
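For reference, the userspace-visible engine classes this series extends
live in include/uapi/drm/i915_drm.h; as of the compute-engine work the
enum reads roughly as follows (check the tree for the authoritative
copy):

enum drm_i915_gem_engine_class {
	I915_ENGINE_CLASS_RENDER	= 0,
	I915_ENGINE_CLASS_COPY		= 1,
	I915_ENGINE_CLASS_VIDEO	= 2,
	I915_ENGINE_CLASS_VIDEO_ENHANCE	= 3,
	I915_ENGINE_CLASS_COMPUTE	= 4,
	I915_ENGINE_CLASS_INVALID	= -1
};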
On Thu, Oct 24, 2019 at 08:13:29AM +0100, Chris Wilson wrote:
> Quoting Kumar Valsan, Prathap (2019-10-23 22:03:40)
> > On Tue, Oct 22, 2019 at 12:57:05PM +0100, Chris Wilson wrote:
> > > Probe the mocs registers for new contexts and across GPU resets. Similar
> > > to
On Wed, Oct 23, 2019 at 01:21:51PM +0100, Chris Wilson wrote:
> Normally, we rely on our hangcheck to prevent persistent batches from
> hogging the GPU. However, if the user disables hangcheck, this mechanism
> breaks down. Despite our insistence that this is unsafe, the users are
> equally insiste
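A hedged sketch of the decision being discussed, with hypothetical
names rather than the real context-close plumbing: without hangcheck
there is no backstop, so only contexts explicitly marked persistent
(and still covered by hangcheck) may outlive their close.

/* Hypothetical sketch, not the i915 implementation. */
#include <stdbool.h>
#include <stdio.h>

struct ctx {
	bool persistent;	/* userspace opted in to run past close */
	bool hangcheck;		/* module-level hangcheck still armed */
};

static bool must_revoke_on_close(const struct ctx *c)
{
	return !c->persistent || !c->hangcheck;
}

int main(void)
{
	struct ctx c = { .persistent = true, .hangcheck = false };

	/* Persistent but unguarded by hangcheck: still revoked. */
	printf("%d\n", must_revoke_on_close(&c));
	return 0;
}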
On Tue, Oct 22, 2019 at 12:57:05PM +0100, Chris Wilson wrote:
> Probe the mocs registers for new contexts and across GPU resets. Similar
> to intel_workarounds, we have tables of what register values we expect
> to see, so verify that user contexts are affected by them. In the
> future, we should a
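The verification loop reduces to something like the following
standalone sketch, with a read callback standing in for the selftest's
real engine/context reads:

/*
 * Sketch of the table-driven check: read each register back and
 * compare it with the value the table says we programmed.
 */
#include <stdint.h>
#include <stdio.h>

struct mocs_ref {
	uint32_t reg;		/* register offset */
	uint32_t expect;	/* expected programmed value */
};

static int check_mocs(uint32_t (*read_reg)(uint32_t),
		      const struct mocs_ref *tbl, unsigned int n)
{
	for (unsigned int i = 0; i < n; i++) {
		uint32_t val = read_reg(tbl[i].reg);

		if (val != tbl[i].expect) {
			fprintf(stderr, "reg %#x: found %#x, expected %#x\n",
				tbl[i].reg, val, tbl[i].expect);
			return -1;
		}
	}
	return 0;
}

static uint32_t fake_read(uint32_t reg)	/* stand-in for mmio reads */
{
	return reg == 0xb020 ? 0x10 : 0;
}

int main(void)
{
	const struct mocs_ref tbl[] = { { 0xb020, 0x10 } };

	return check_mocs(fake_read, tbl, 1) ? 1 : 0;
}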
On Sat, Oct 19, 2019 at 12:20:18AM +0100, Chris Wilson wrote:
> Quoting Kumar Valsan, Prathap (2019-10-19 00:24:13)
> > On Fri, Oct 18, 2019 at 11:14:39PM +0100, Chris Wilson wrote:
> > > +static int check_l3cc_table(struct intel_engine_cs *engine,
> > > +
On Fri, Oct 18, 2019 at 11:14:39PM +0100, Chris Wilson wrote:
> Probe the mocs registers for new contexts and across GPU resets. Similar
> to intel_workarounds, we have tables of what register values we expect
> to see, so verify that user contexts are affected by them. In the
> future, we should a
On Fri, Oct 18, 2019 at 01:06:39PM +0100, Chris Wilson wrote:
> Probe the mocs registers for new contexts and across GPU resets. Similar
> to intel_workarounds, we have tables of what register values we expect
> to see, so verify that user contexts are affected by them. In the
> future, we should a
On Wed, Oct 16, 2019 at 09:42:36AM +0100, Chris Wilson wrote:
> Quoting Prathap Kumar Valsan (2019-10-16 05:05:58)
> > Gen12 has L3 MOCS in the engine reset domain, requiring us to
> > re-initialize them on an engine reset.
>
> Hmm, aiui we can do this by removing half of intel_mocs.c...
Right. Tested "Do ini
On Wed, Oct 16, 2019 at 10:07:49AM +0100, Chris Wilson wrote:
> Now that we record the default "goldenstate" context, we do not need to
> emit the mocs registers at the start of each context and can simply do
> mmio before the first context and capture the registers as part of its
> default image.
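A hedged sketch of that flow, with a simplified stand-in for the
context image: program the registers once by mmio before any context
runs, snapshot the default ("golden") image, and stamp new contexts
from the snapshot instead of re-emitting the registers each time.

/* Sketch only; not the real context-image handling. */
#include <stdint.h>

#define IMAGE_WORDS 8

struct ctx_image { uint32_t regs[IMAGE_WORDS]; };

static struct ctx_image default_image;

static void mmio_program_mocs(struct ctx_image *hw)
{
	for (int i = 0; i < IMAGE_WORDS; i++)
		hw->regs[i] = 0x10 + i;	/* placeholder MOCS values */
}

static void capture_default_image(const struct ctx_image *hw)
{
	default_image = *hw;		/* "golden" snapshot */
}

static void new_context(struct ctx_image *ctx)
{
	*ctx = default_image;		/* inherit, no register emission */
}

int main(void)
{
	struct ctx_image hw = {0}, ctx;

	mmio_program_mocs(&hw);		/* once, before the first context */
	capture_default_image(&hw);
	new_context(&ctx);		/* inherits the golden values */
	return ctx.regs[0] == default_image.regs[0] ? 0 : 1;
}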
On Mon, Oct 07, 2019 at 09:37:20PM +0100, Chris Wilson wrote:
> Quoting Prathap Kumar Valsan (2019-10-07 17:52:09)
> > To provide shared last-level-cache isolation to cpu workloads running
> > concurrently with gpu workloads, the gpu allocation of cache lines needs
> > to be restricted to certain w
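The restriction boils down to way partitioning; a hedged arithmetic
sketch (the register layout here is hypothetical): on an N-way LLC,
giving the GPU only the low k ways leaves the remaining ways
exclusively to CPU workloads.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical way-mask math, not a real register layout. */
static uint32_t gpu_way_mask(unsigned int llc_ways, unsigned int gpu_ways)
{
	uint32_t all = (UINT32_C(1) << llc_ways) - 1;

	return all & ((UINT32_C(1) << gpu_ways) - 1);
}

int main(void)
{
	/* 4 of 16 ways for the GPU: gpu 0x000f, cpu keeps 0xfff0. */
	uint32_t gpu = gpu_way_mask(16, 4);

	printf("gpu 0x%04x, cpu 0x%04x\n", gpu,
	       (((UINT32_C(1) << 16) - 1) & ~gpu));
	return 0;
}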
On Mon, Sep 09, 2019 at 02:50:20PM +0300, Joonas Lahtinen wrote:
> Quoting Prathap Kumar Valsan (2019-08-26 01:48:01)
> > To provide shared last-level-cache isolation to cpu workloads running
> > concurrently with gpu workloads, the gpu allocation of cache lines needs
> > to be restricted to certai
On Tue, Aug 27, 2019 at 03:35:14PM +0100, Chris Wilson wrote:
> Quoting Kumar Valsan, Prathap (2019-08-27 15:17:51)
> > We want to support this on Gen11 as well, where these registers
> > are context saved and restored and we prime the register values of new
> > conte
On Mon, Aug 26, 2019 at 10:17:55AM +0100, Chris Wilson wrote:
> Quoting Prathap Kumar Valsan (2019-08-26 00:35:27)
> > To provide shared last-level-cache isolation to cpu workloads running
> > concurrently with gpu workloads, the gpu allocation of cache lines needs
> > to be restricted to certain w
On Mon, Aug 26, 2019 at 09:39:48AM +0100, Chris Wilson wrote:
> Quoting Prathap Kumar Valsan (2019-08-26 00:35:27)
> > To provide shared last-level-cache isolation to cpu workloads running
> > concurrently with gpu workloads, the gpu allocation of cache lines needs
> > to be restricted to certain w
On Wed, Aug 07, 2019 at 01:55:56PM -0700, Stuart Summers wrote:
> Reduce code by defaulting to true in the MOCS settings
> initialization function. Set to false for unexpected
> platforms.
>
> Signed-off-by: Stuart Summers
Reviewed-by: Prathap Kumar Valsan
> ---
> drivers/gpu/drm/i915/gt/intel_
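The shape of that simplification, sketched with a hypothetical platform
check: default to true, and opt out only for the platforms known not to
use MOCS, rather than opting in per platform.

#include <stdbool.h>

struct mocs_flags { bool uses_mocs; };

static void mocs_init_flags(struct mocs_flags *f, int platform_gen)
{
	f->uses_mocs = true;		/* the common case */
	if (platform_gen < 9)		/* hypothetical cutoff */
		f->uses_mocs = false;	/* unexpected/legacy platforms */
}

int main(void)
{
	struct mocs_flags f;

	mocs_init_flags(&f, 12);
	return f.uses_mocs ? 0 : 1;
}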
On Wed, Aug 07, 2019 at 01:55:55PM -0700, Stuart Summers wrote:
> User applications might need to verify hardware configuration
> of the MOCS entries. To facilitate this debug, add a new debugfs
> entry to allow a dump of the MOCS state to verify expected values
> are set by i915.
>
> Signed-off-b
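A debugfs dump like this typically follows the standard seq_file
pattern; a hedged kernel-style sketch (names are hypothetical, the
register read is stubbed, and this is not the actual patch):

#include <linux/debugfs.h>
#include <linux/seq_file.h>

#define MOCS_ENTRIES 64

static u32 read_mocs_entry(unsigned int i)	/* stub for the mmio read */
{
	return 0;
}

static int i915_mocs_show(struct seq_file *m, void *unused)
{
	for (unsigned int i = 0; i < MOCS_ENTRIES; i++)
		seq_printf(m, "MOCS[%u] = 0x%08x\n", i, read_mocs_entry(i));
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(i915_mocs);

/* registered somewhere appropriate, e.g.:
 * debugfs_create_file("i915_mocs", 0444, parent, NULL, &i915_mocs_fops);
 */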
On Wed, Aug 07, 2019 at 05:00:29PM, Michal Wajdeczko wrote:
> There is no point in selecting HuC firmware if GuC is unsupported
> or it was already disabled, as we need GuC to authenticate HuC.
>
We are calling intel_uc_init() irrespective of whether
intel_uc_fetch_firmwares() is successful. Is thi
On Fri, Aug 02, 2019 at 09:30:43PM +0100, Chris Wilson wrote:
> Quoting Prathap Kumar Valsan (2019-08-02 21:41:11)
> > Currently i915_vma_insert() is responsible for allocating drm mm node
> > and also allocating or gathering physical pages. Move the latter to a
> > separate function for better rea
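The refactor described is just separating the two responsibilities; a
simplified standalone sketch with hypothetical names, not the i915
functions:

/* Split "reserve address space" from "get backing pages" so each can
 * be reasoned about (and later made asynchronous) on its own. */
#include <stdbool.h>

struct vma { bool node_allocated; bool pages_ready; };

static int vma_alloc_node(struct vma *v)	/* drm mm node reservation */
{
	v->node_allocated = true;
	return 0;
}

static int vma_get_pages(struct vma *v)		/* gather physical pages */
{
	v->pages_ready = true;
	return 0;
}

static int vma_insert(struct vma *v)
{
	int err = vma_get_pages(v);

	if (err)
		return err;
	return vma_alloc_node(v);
}

int main(void)
{
	struct vma v = { false, false };

	return (vma_insert(&v) == 0 && v.node_allocated && v.pages_ready)
		? 0 : 1;
}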
On Fri, Jun 14, 2019 at 08:10:16AM +0100, Chris Wilson wrote:
> If we let pages be allocated asynchronously, we also then want to push
> the binding process into an asynchronous task. Make it so, utilising the
> recent improvements to fence error tracking and struct_mutex reduction.
>
> Signed-off
On Tue, Jul 30, 2019 at 12:21:51PM +0100, Chris Wilson wrote:
> Recently discovered in commit bdae33b8b82b ("drm/i915: Use maximum write
> flush for pwrite_gtt") was that we needed to use our full write barrier
> before changing the GGTT PTE to ensure that our indirect writes through
> the GTT landed b
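The ordering requirement, sketched in portable C11 with a release fence
standing in for the kernel's wmb() (the PTE write itself is
simplified):

/* Make sure all prior (indirect) writes are flushed before the PTE is
 * rewritten; otherwise they may land in the newly mapped page. */
#include <stdatomic.h>
#include <stdint.h>

static void update_ggtt_pte(_Atomic uint64_t *pte, uint64_t new_entry)
{
	/* all earlier writes must be visible before the remap */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(pte, new_entry, memory_order_relaxed);
}

int main(void)
{
	_Atomic uint64_t pte = 0;

	update_ggtt_pte(&pte, 0x1000 | 1);	/* hypothetical addr|valid */
	return 0;
}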
On Tue, Jul 30, 2019 at 02:30:26PM +0100, Chris Wilson wrote:
> We only compute the lrc_descriptor() on pinning the context, i.e.
> infrequently, so we do not benefit from storing the template as the
> addressing mode is also fixed for the lifetime of the intel_context.
>
> Signed-off-by: Chris Wi
On Mon, Jul 15, 2019 at 09:09:43AM +0100, Chris Wilson wrote:
> We only need to keep a unique tag for the active lifetime of the
> context, and for as long as we need to identify that context. The HW
> uses the tag to determine if it should use a lite-restore (why not the
> LRCA?) and passes the ta