I remember there was a discussion about different block sizes. It
wasn't possible at that time, and I doubt it is possible now. Can you set
equal block sizes and see if the simulation runs correctly?
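A minimal sketch of what that would look like in a config script, assuming
the caches of this era still take a per-cache block_size parameter (the
object names are illustrative):

# hypothetical: give every cache level the same line size
for cache in [system.cpu.icache, system.cpu.dcache, system.l2]:
    cache.block_size = 64  # bytes; one value for all levels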
On 4/15/12, Ankita (Garg) Goel wrote:
> Hi,
>
> When I try to run SPEC2k6 or a few PARSEC benchmarks i
Hi,
When I try to run SPEC2k6 or a few PARSEC benchmarks in x86 SE mode for a
larger number of instructions, I get the following assertion failure:
gem5.opt: build/X86/mem/cache/cache_impl.hh:344: bool
Cache::access(Packet*, typename TagStore::BlkType*&, int&,
PacketList&) [with TagStore = LRU]: As
I haven't checked the new version yet. There may be something wrong with
the loader, but I am not sure. Who can check that?
P.S.: Dear Gabe, I think there is something wrong with the address
translator. I would greatly appreciate it if you could check
http://permalink.gmane.org/gmane.comp.emulators.m5.users/9944
On
It's worth looking into why it doesn't find the __libc_start_main symbol
in the new version. If it's a bug we should fix it, even if it doesn't
directly have anything to do with your problem. You can also try
versions between your new and old one and see where things start
behaving poorly. This is
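If you go the bisection route, Mercurial can drive the search for you; a
sketch, using the two changeset numbers mentioned elsewhere in this thread:

hg bisect --reset
hg bisect --good 8613      # old revision that behaved well
hg bisect --bad 8944       # newer revision that misbehaves
# hg updates to a midpoint; rebuild, rerun, then mark it:
hg bisect --good           # or: hg bisect --bad, until one changeset remains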
Hi,
Mark pointed me to two patches, which I think have been included in newer
versions of gem5 (I am running an older version), and which resolve the
skidbuffer issue:
http://www.mail-archive.com/gem5-dev@gem5.org/msg03413.html
http://comments.gmane.org/gmane.comp.emulators.m5.devel/1
Well, in MyBench.py there is only one entry for h264_sss:
h264_dir = spec_dir + '464.h264ref/exe/'
h264_bin = h264_dir + 'h264ref_base.amd64-m64-gcc44-nn'
h264_sss_data = h264_dir + 'sss_encoder_main.cfg'
h264_sss = LiveProcess()
h264_sss.executable = h264_bin
h264_sss.cmd = [h264_sss.executable] + ['-d', h264_sss_data]  # assuming h264ref takes its encoder .cfg via -d
I suspect you're not running exactly the same binary in both cases.
__libc_start_main is one of the functions provided by glibc (if I
remember correctly) that runs before main() and sets some basic things
up. If it says __libc_start_main in one, it should say it in the
other one too, unless the
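One quick way to confirm the symbol really is in the binary being run
(SE-mode binaries are typically statically linked, so it should appear in
the regular symbol table; the file name below is the one from MyBench.py):

nm h264ref_base.amd64-m64-gcc44-nn | grep __libc_start_main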
I reduced the fast-forward count to 20 instructions and maxinst to
10, and turned on the ExecAll flag.
The old one looks like:
23000: system.cpu + A0 T0 : @_start+36.3 : CALL_NEAR_I : subi
rsp, rsp, 0x8 : IntAlu : D=0x7fffed38
24000: system.cpu + A0 T0 : @_start+36.4 : CALL_NEA
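For reference, traces like the above need an .opt build (tracing is
compiled out of m5.fast); the invocation would be something like this,
assuming cmp.py exposes the usual fast-forward and max-instruction options:

build/X86/m5.opt --debug-flags=ExecAll configs/example/cmp.py -F 20 --maxinsts 10 --caches --l2cache -b h264_sss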
I am trying what you said, but can you clarify this:
Although the -F option is 20M instructions in both versions, I noticed that
the old version enters real simulation at tick 22,407,755,000 but the new
version enters at tick 90,443,309,000.
I made the config files match as closely as possible (same syst
- Make every O3CPU parameter that differs in the new version the same
as in the old version.
- Check the stats file for major differences.
For example: are the L1/L2 miss rates higher or lower? Are your caches the
same size and associativity? This is H.264, so is there a lot of floating
point in
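A quick way to put those stats side by side (exact stat names vary a bit
across gem5 versions, and the paths here are just placeholders for your
two runs):

grep -E 'sim_insts|ipc|overall_miss_rate' old/stats.txt new/stats.txt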
I haven't used VExpress_ELT before; however, I'm testing it out now with
the pre-compiled kernel provided by gem5. Have you tried that, in order to
rule out a kernel issue? I've rarely seen any chatter on the mailing
lists regarding the VExpress machine types, so I expect they haven't been
d
I did that.
There are some differences and I attached them. In short, I see this:
old:
children=dcache dtb icache itb tracer workload
new:
children=dcache dtb icache interrupts itb tracer workload
Also, the commitWidth, fetchWidth and some other parameters are 8 in the new
version, but they are 4 in the old one.
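If the goal is an apples-to-apples run, those widths can be pinned back in
the config script; a sketch using the O3 CPU's parameter names:

# force the new version's O3 widths back to the old values
system.cpu.fetchWidth  = 4
system.cpu.decodeWidth = 4
system.cpu.issueWidth  = 4
system.cpu.commitWidth = 4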
I believe the 'dotencode' message just means you should upgrade to a newer
version of mercurial.
On Sat, Apr 14, 2012 at 10:36 AM, Mahmood Naderan wrote:
> I forgot to say that I removed the 'dotencode' feature and the "hg heads"
> says:
>
> mahmood@tiger:gem5$ hg heads
> changeset: 8920:99083b
In the uniprocessor simulation, sim_ticks is equal to numCycles times
the clock period. But in the multicore simulation, this relationship
does not work out. Below are the sim_ticks and numCycles that I
obtained from a simulation of 16 cores at 2 GHz.
sim_ticks 184267036000
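For a single core the expected relation is sim_ticks = numCycles *
ticks_per_cycle; with the default 1 ps tick, a 2 GHz core has a 500-tick
period:

sim_ticks = 184267036000
ticks_per_cycle = 500                            # 2 GHz => 0.5 ns period
expected_cycles = sim_ticks / ticks_per_cycle    # ~368.5M cycles

In a multicore run, sim_ticks is the global simulated time while numCycles
is per core, so a core that sits idle or suspended for part of the run can
report fewer cycles than this arithmetic predicts; that is one plausible
source of the mismatch.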
How about you do this...
#1 Roll back to the old version. Run gem5. Save the config.ini file.
#2 Go to the new version. Run gem5. Save the config.ini file.
Finally, diff the two config.ini files and see what changed in your
configuration.
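Something like this, assuming the default m5out output directory:

# in the old tree
build/X86_SE/m5.fast configs/example/cmp.py ...
cp m5out/config.ini /tmp/old.ini
# in the new tree
build/X86/m5.fast configs/example/cmp.py ...
cp m5out/config.ini /tmp/new.ini
diff /tmp/old.ini /tmp/new.ini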
On Sat, Apr 14, 2012 at 3:02 PM, Mahmood Naderan wrote:
> T
As far as I know, for uniprocessor simulation:
1 tick = 1 ps
No matter what your frequency is, 1 tick is always 1 ps.
If you set maxtick to 10B, then you are actually simulating 0.01 seconds
of real execution. I suggest working with 1 core first to see the
relations.
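The arithmetic, spelled out:

maxtick = 10**10             # 10 B ticks
seconds = maxtick * 1e-12    # 1 tick = 1 ps, so 0.01 s of simulated time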
On 4/14/12, Zheng Wu wrote:
>
The cmp.py has not been changed from the old one. However, some minor
changes (master/slave port names) were made while porting to the new
release.
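For anyone porting similar scripts, the renaming in question looks roughly
like this in a config script (object names are illustrative):

# old style: buses exposed a single anonymous 'port' vector
system.cpu.icache_port = system.membus.port
# new style: buses expose explicit master/slave port vectors
system.cpu.icache_port = system.membus.slave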
On 4/14/12, Malek Musleh wrote:
> I missed that, my mistake. I don't know what the cmp.py script is
> doing, as I don't have it in my repo, but if you look a
Hi,
If you set the CPU frequency to 1 GHz, then I believe sim ticks are 1000
times greater than num cycles, so tick 1000 is just clock cycle 1. Now if
you increase the frequency to 2 GHz, then sim tick 500 is 1 CPU clock
cycle, I believe. Try running it with different frequencies to verify this, but
I missed that, my mistake. I don't know what the cmp.py script is
doing, as I don't have it in my repo, but if you look at the sim_insts
between the two stats files you posted, there is a big difference:
new: sim_insts 20767049
old: sim_insts
why?
-d is detailed
On 4/14/12, Malek Musleh wrote:
> Well, first of all, you are not making fair comparisons, as in each of
> the runs you are using a different CPU type. So I think you should
> start by fixing that parameter first.
>
> Malek
>
> On Sat, Apr 14, 2012 at 1:34 PM, Mahmood Naderan
Well, first of all, you are not making fair comparisons, as in each of
the runs you are using a different CPU type. So I think you should
start by fixing that parameter first.
Malek
On Sat, Apr 14, 2012 at 1:34 PM, Mahmood Naderan wrote:
> For the old one, I use:
> build/X86_SE/m5.fast configs/ex
I forgot to say that I removed the 'dotencode' feature and the "hg heads" says:
mahmood@tiger:gem5$ hg heads
changeset: 8920:99083b5b7ed4
abort: data/.hgtags.i@b151ff1fd9df: no match found!
On 4/14/12, Mahmood Naderan wrote:
> For the old one, I use:
> build/X86_SE/m5.fast configs/example/cmp
For the old one, I use:
build/X86_SE/m5.fast configs/example/cmp.py -F 2000 --maxtick 100 -d --caches --l2cache -b h264_sss --prog-interval=100
For the new one, I use:
build/X86/m5.fast configs/example/cmp.py --cpu-type=detailed -F 2000 --maxtick 100 --caches --l2cache
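For the runs to be comparable, the new invocation would also need the
benchmark and interval flags from the old one; assuming cmp.py kept the -b
and --prog-interval options, something like:

build/X86/m5.fast configs/example/cmp.py --cpu-type=detailed -F 2000 --maxtick 100 --caches --l2cache -b h264_sss --prog-interval=100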
So, with 8613:712d8bf07020 you got an IPC of 1.54, and with some version
near 8944:d062cc7a8bdf, you get an IPC of 0.093. Which CPU type are you
using?
--
Nilay
On Sat, 14 Apr 2012, Mahmood Naderan wrote:
The previous release is:
changeset: 8613:712d8bf07020
tag: tip
user:
I have to say that I used the se.py script.
Thanks for any idea/help/hint/pointer/suggestion/comment
On 4/14/12, Mahmood Naderan wrote:
> Hi,
> In previous versions, I didn't face this error. However, in the
> new versions (from 2 weeks ago), I get this error:
>
> fatal: Unable to find destin
The previous release is:
changeset: 8613:712d8bf07020
tag: tip
user: Nilay Vaish
date: Sat Nov 05 15:32:23 2011 -0500
summary: Tests: Update stats due to addition of fence microop
And the IPC is 1.541534
However for the new release, I am not able to find the head:
mah
How much is the difference and which versions of gem5 are you talking
about?
--
Nilay
On Sat, 14 Apr 2012, Mahmood Naderan wrote:
Hi,
In the new version, I see that the IPC of h264 (with the sss input) is
very low. However, with previous releases this value is fine and
acceptable.
Do yo
Hi,
In the new version, I see that the IPC of h264 (with the sss input) is
very low. However, with previous releases this value is fine and
acceptable.
Do you know how I can find the bottleneck? Which stat value shows the
weird behaviour?
ISA = x86
-F = 50,000,000
--maxtick = 10,000,000,000
Hi,
In previous versions, I didn't face this error. However, in the new
versions (from 2 weeks ago), I get this error:
fatal: Unable to find destination for addr 0x4000 on bus system.membus
@ cycle 0
[findPort:build/X86/mem/bus.cc, line 402]
Using "Cache" debug flag, it shows:
0: system