I just remembered the other real problem with implementing a fully
pipelined fetch: we'd have to redo how the branch predictors are
currently done.
Currently the gem5 branch predictors "cheat" in that they have full access
to the decoded instruction (unconditional, indirect, RAS push/pop, etc).
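To illustrate the point, here is a toy sketch (not gem5 code, and not how gem5's BPredUnit is structured) of what a more realistic predictor looks like: at fetch time only the PC is known, so branch type and target have to come from a BTB trained after the instruction is actually decoded, rather than from the decoded instruction itself.

```python
class BTBPredictor:
    """Toy PC-indexed predictor: at prediction time only the fetch PC
    is available, so targets and branch-type hints must come from a
    BTB populated after decode/commit. A BTB miss falls through."""

    INST_BYTES = 4  # assumed fixed instruction size for the sketch

    def __init__(self):
        # pc -> (target, is_return, is_call), learned after decode
        self.btb = {}

    def predict(self, pc):
        # No decoded instruction here: an unseen PC predicts fall-through.
        target, _, _ = self.btb.get(pc, (pc + self.INST_BYTES, False, False))
        return target

    def update(self, pc, target, is_call=False, is_return=False):
        # Called once the instruction is decoded and its kind is known.
        self.btb[pc] = (target, is_return, is_call)
```

A predictor with access to the decoded instruction can skip the BTB entirely for unconditional branches and RAS pushes/pops, which is exactly the "cheat" that stops working once fetch is pipelined ahead of decode.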
Thanks for the response Mitch. It seems like a nice way to fake a pipelined
fetch.
Amin
On Tue, Aug 26, 2014 at 10:54 AM, Mitch Hayenga <
mitch.hayenga+g...@gmail.com> wrote:
Yep,
I've thought about the need for a fully pipelined fetch as well. However, my
current method is to fake longer instruction cache latencies by leaving the
delay at 1 cycle, but making up for it by adding additional "fetchToDecode"
delay. This makes the front-end latency and branch mispredict penalty come
out right.
Hi,
Looking at the code for the fetch unit in O3, I realized that the fetch
unit does not take advantage of non-blocking i-caches: it does not
initiate a new i-cache request while it is waiting for an i-cache
response. Since the fetch unit in O3 does not pipeline i-cache requests,
fetch throughput is limited by the i-cache access latency.