[gem5-users] How many cpu does the x86 vmlinux SMP kernel support?

2014-08-26 Thread Chao Zhang via gem5-users
Dear all, I’m currently working on x86 full-system (FS) simulation with the classic memory system to model the cache hierarchy. But I found that the kernel boot just hangs before loading the benchmark script. It does not work when I configure 3 or more x86 timing simple CPUs, but it does work when I set them as atomic cores. And it also w
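A command along these lines reproduces the kind of configuration described above. This is a hedged sketch, not the poster's actual command: the kernel image name is taken from the later message in this digest, while the script name and CPU count are placeholders.

```shell
# Hypothetical example of the setup described above: x86 full-system
# boot with the classic memory model and several timing CPUs.
# The benchmark script name and CPU count are placeholders.
build/X86/gem5.opt configs/example/fs.py \
    --cpu-type=timing \
    --num-cpus=4 \
    --caches --l2cache \
    --kernel=x86_64-vmlinux-2.6.22.9.smp \
    --script=benchmark.rcS
```

Swapping `--cpu-type=timing` for `--cpu-type=atomic` is the comparison the poster describes: atomic boots, timing with 3+ CPUs hangs.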

Re: [gem5-users] O3 fetch throughput when i-cache hit latency is more than 1 cycle

2014-08-26 Thread Amin Farmahini via gem5-users
Thanks for the response, Mitch. It seems like a nice way to fake a pipelined fetch. Amin On Tue, Aug 26, 2014 at 10:54 AM, Mitch Hayenga <mitch.hayenga+g...@gmail.com> wrote: > Yep, > > I've thought of the need for a fully pipelined fetch as well. However my > current method is to fake longer

[gem5-users] How to add shared nonblocking L3 cache in gem5?

2014-08-26 Thread Prathap Kolakkampadath via gem5-users
Hi Users, I am new to gem5 and I want to add a non-blocking shared last-level cache (L3). I can see L3 cache options in Options.py with default values set. However, there is no entry for L3 in Caches.py and CacheConfig.py. Would extending Caches.py and CacheConfig.py be enough to create an L3 cache?
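A minimal sketch of what such an extension might look like, assuming the classic-cache `BaseCache` parameters of gem5 from that era; the class name, sizes, and latencies are illustrative, not from the original message:

```python
# Hypothetical sketch only: an L3 class added alongside the ones in
# Caches.py, then wired up in CacheConfig.py behind its own bus.
# Parameter names follow the classic gem5 cache model of that era;
# all values are illustrative.
from m5.objects import BaseCache, CoherentBus

class L3Cache(BaseCache):
    size = '4MB'
    assoc = 16
    hit_latency = 20
    response_latency = 20
    mshrs = 32           # plenty of MSHRs -> non-blocking behaviour
    tgts_per_mshr = 12

# In CacheConfig.py, between the L2(s) and the memory bus:
# system.l3 = L3Cache()
# system.tol3bus = CoherentBus()
# system.l3.cpu_side = system.tol3bus.master
# system.l3.mem_side = system.membus.slave
```

The number of MSHRs is what makes the cache non-blocking in the classic memory model: each MSHR tracks one outstanding miss, so more MSHRs allow more misses in flight.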

[gem5-users] Kernel version vs Gem5 version

2014-08-26 Thread Fulya via gem5-users
Hi all, It seems like using the kernel version x86_64-vmlinux-2.6.22.9.smp may have solved my problem that was posted in this thread: http://www.mail-archive.com/gem5-users@gem5.org/msg10387.html However, I am using the latest gem5 version gem5-stable-aaf017eaad7d and I only tested the atomic cpu

Re: [gem5-users] O3 fetch throughput when i-cache hit latency is more than 1 cycle

2014-08-26 Thread Mitch Hayenga via gem5-users
Yep, I've thought of the need for a fully pipelined fetch as well. However, my current method is to fake longer instruction cache latencies by leaving the cache delay at 1 cycle and making up for it by adding additional "fetchToDecode" delay. This makes the front-end latency and branch mispredict penal
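The workaround described above could look something like this in an O3 CPU configuration. This is a sketch under the assumption that `fetchToDecodeDelay` is the relevant O3 parameter; the 3-cycle target latency is illustrative:

```python
# Hypothetical sketch of the workaround described above: keep the
# i-cache hit latency at 1 cycle and charge the remaining fetch
# latency as fetch-to-decode pipeline delay instead, so fetch stays
# effectively pipelined. The 3-cycle figure is illustrative.
from m5.objects import DerivO3CPU

cpu = DerivO3CPU()
# Suppose the i-cache hit latency we want to model is 3 cycles:
# keep 1 cycle in the cache itself and add the other 2 cycles
# as extra front-end pipeline stages.
cpu.fetchToDecodeDelay = 1 + 2   # default is 1; add the 2 hidden cycles
```

As the message notes, this keeps front-end and mispredict latency correct while sidestepping the non-pipelined fetch limitation.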

[gem5-users] O3 fetch throughput when i-cache hit latency is more than 1 cycle

2014-08-26 Thread Amin Farmahini via gem5-users
Hi, Looking at the code for the fetch unit in O3, I realized that the fetch unit does not take advantage of non-blocking i-caches. The fetch unit does not initiate a new i-cache request while it is waiting for an i-cache response. Since the fetch unit in O3 does not pipeline i-cache requests, fet
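The throughput loss described above can be quantified with a back-of-the-envelope model (my own illustration, not from the thread): if fetch issues one blocking i-cache request at a time, sustained fetch bandwidth drops by roughly a factor of the hit latency.

```python
def fetch_bandwidth(fetch_width, icache_hit_latency, pipelined):
    """Rough model (illustrative, not gem5 code): instructions fetched
    per cycle, assuming each i-cache access supplies fetch_width
    instructions. A non-pipelined fetch waits icache_hit_latency cycles
    per request; a pipelined one can start a new request every cycle."""
    if pipelined:
        return fetch_width
    return fetch_width / icache_hit_latency

# With a 4-wide fetch and a 3-cycle i-cache hit latency:
print(fetch_bandwidth(4, 3, pipelined=True))    # 4 insts/cycle
print(fetch_bandwidth(4, 3, pipelined=False))   # ~1.33 insts/cycle
```

So with a 1-cycle hit latency the limitation is invisible, which is exactly why the workaround elsewhere in this thread keeps the cache latency at 1 cycle and moves the extra delay elsewhere.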

Re: [gem5-users] Gem5 on multiple cores

2014-08-26 Thread Steve Reinhardt via gem5-users
I'll mention that gem5 does have the foundation for parallelizing a single simulation across multiple cores; see for example http://repo.gem5.org/gem5/rev/2cce74fe359e. However, if you want to model a non-trivial configuration (i.e., one where there is communication between threads), then you have

Re: [gem5-users] Gem5 on multiple cores

2014-08-26 Thread Hussain Asad via gem5-users
Thank you, Andreas. (*moved to gem5-users :) On Tue, Aug 26, 2014 at 8:39 AM, Andreas Hansson wrote: > Hi Hussain, > > I'd suggest asking on the gem5-users list for everyone's benefit. > > Multi-threading invariably comes at a cost, and if you want to run, say, > 10 experiments, they are embarra
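The point being made above is that independent gem5 experiments are embarrassingly parallel: rather than parallelizing one simulation, launch one single-threaded run per host core. A hypothetical illustration (binary path, config script, and options are placeholders, not from the original message):

```shell
# Hypothetical sketch: run 10 independent single-threaded gem5
# experiments concurrently, one output directory per run.
# Paths, the benchmark binary, and options are placeholders.
for i in $(seq 1 10); do
    build/X86/gem5.opt -d m5out_run$i configs/example/se.py \
        --cpu-type=timing --caches --cmd=./bench --options="$i" &
done
wait
```

Each run is independent, so this scales linearly with host cores, with none of the synchronization cost of a multi-threaded simulation.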