Dear all,
I’m currently running x86 FS on the classic memory system to simulate a cache
system. But I found that kernel booting hangs before loading the benchmark
script. It does not work when I set 3 or more x86 timing simple CPUs, but it
does work when I set them as atomic cores. And it also w
Thanks for the response Mitch. It seems like a nice way to fake a pipelined
fetch.
Amin
On Tue, Aug 26, 2014 at 10:54 AM, Mitch Hayenga <
mitch.hayenga+g...@gmail.com> wrote:
> Yep,
>
> I've thought of the need for a fully pipelined fetch as well. However my
> current method is to fake longer
Hi Users,
I am new to gem5 and I want to add a non-blocking shared last-level cache (L3).
I can see L3 cache options in Options.py with default values set. However,
there is no entry for L3 in Caches.py or CacheConfig.py.
Would extending Caches.py and CacheConfig.py be enough to create the L3 cache?
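For what it’s worth, a minimal sketch of what such an extension might look like, modeled on the L1/L2 classes already in Caches.py. The class name, sizes, and latencies below are illustrative assumptions, and parameter names vary across gem5 versions (e.g. BaseCache with hit_latency in older releases vs. Cache with split tag/data latencies in newer ones):

```python
# Hypothetical L3Cache for Caches.py; all values are example assumptions.
from m5.objects import Cache

class L3Cache(Cache):
    size = '4MB'
    assoc = 16
    tag_latency = 20
    data_latency = 20
    response_latency = 20
    mshrs = 32            # multiple MSHRs make the cache non-blocking
    tgts_per_mshr = 12
```

You would then instantiate it in CacheConfig.py between the L2 bus and the memory bus, mirroring how the shared L2 is wired up there.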
Hi all,
It seems that using kernel version x86_64-vmlinux-2.6.22.9.smp may have
solved my problem posted in this thread:
http://www.mail-archive.com/gem5-users@gem5.org/msg10387.html
However, I am using the latest gem5 version gem5-stable-aaf017eaad7d and I have
only tested the atomic CPU
Yep,
I've thought of the need for a fully pipelined fetch as well. However, my
current method is to fake longer instruction cache latencies by leaving the
delay at 1 cycle and making up for it by adding additional "fetchToDecode"
delay. This makes the front-end latency and branch mispredict penalty
Hi,
Looking at the code for the fetch unit in O3, I realized that the fetch
unit does not take advantage of non-blocking i-caches. The fetch unit does
not initiate a new i-cache request while it is waiting for an i-cache
response. Since the fetch unit in O3 does not pipeline i-cache requests, fet
I'll mention that gem5 does have the foundation for parallelizing a single
simulation across multiple cores; see for example
http://repo.gem5.org/gem5/rev/2cce74fe359e. However, if you want to model
a non-trivial configuration (i.e., one where there is communication between
threads), then you have
Thank you, Andreas
*moved to gem5-users :)
On Tue, Aug 26, 2014 at 8:39 AM, Andreas Hansson
wrote:
> Hi Hussain,
>
> I’d suggest asking on the gem5-users list for everyone’s benefit.
>
> Multi-threading invariably comes at a cost, and if you want to run say
> 10 experiments, they are embarra