To the best of my understanding, it will be option 1 (A2 can be sent once the
L1 has received A1), assuming the classic cache. However, for the latency
(assuming read latency), it depends on how the blocking mechanism is
implemented for a blocking cache (I don't know the details of how the
blocking mechanism works for the cache).
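For what it's worth, in the classic memory system the number of outstanding misses a cache can track is set by its MSHR parameters. A minimal config sketch, assuming a recent gem5 (parameter names are from the classic `Cache` SimObject in `src/mem/cache/Cache.py`; the values below are placeholders, not recommendations):

```python
# Config fragment: a classic L1 data cache with explicit MSHR settings.
from m5.objects import Cache

class L1DCache(Cache):
    size = '32kB'
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 4          # outstanding misses the cache can track at once
    tgts_per_mshr = 8  # requests that can coalesce onto one in-flight miss
```

With `mshrs` greater than 1, the cache can have several misses in flight to the lower level; once all MSHRs are in use, the cache blocks and stops accepting new requests from the CPU side.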
On Tue, Mar 18, 2025 at 8:07 PM Nazmus Sakib via gem5-users <
gem5-users@gem5.org> wrote:

> Hello.
> Suppose I have 4 addresses going to the cache from the CPU: A1, A2, A3, and
> A4. All are read requests.
> Assume my L1 cache is blocking.
> Now, the load-store unit will first send A1.
> If it is a miss, the L1 will send it to the L2. If it is an L2 hit, the L2
> will send the data back to the L1, and the L1 will send it back to the CPU.
>
> My question is: when will A2 be sent?
> 1) When the mem-side port of the CPU becomes available, i.e. once the L1
> has received A1 (and is trying to determine hit/miss, etc.)?
> In this case, what will be the delay? The latency of the crossbar/bus
> connecting the L1 and L2?
>
> 2) Or will the CPU wait until it has received the data for A1 from the L1?
> Meaning that at any given cycle, the cache object can deal with only a
> single request?
>
> Also, can the cache in gem5 send multiple misses to lower-level memories
> in the same cycle (assuming all are MSHR misses)?
>
> _______________________________________________
> gem5-users mailing list -- gem5-users@gem5.org
> To unsubscribe send an email to gem5-users-le...@gem5.org
>