Oh, I had changed the kernel version string returned by the uname syscall from
2.6.16.19 to 2.6.22.9 in src/arch/x86/linux/syscalls.cc.
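For reference, that string is what glibc's startup check compares against when
it prints "FATAL: kernel too old", so it has to be at least as new as the
kernel the benchmark binary was built for. The change is just the release
string inside unameFunc(); a minimal sketch (the exact surrounding code differs
between gem5 revisions, so treat this as illustrative):

    // src/arch/x86/linux/syscalls.cc, inside unameFunc()
    strcpy(name->sysname, "Linux");
    strcpy(name->release, "2.6.22.9");   // was "2.6.16.19"; glibc aborts with
                                          // "kernel too old" if this reports a
                                          // kernel older than it was built for
    strcpy(name->machine, "x86_64");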

On Sat, Jun 9, 2012 at 1:19 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:

> I think there is a problem with your binary. I ran your binary with my
> revision (8920) but got this error:
>
> 0: system.remote_gdb.listener: listening for remote gdb on port 7000
> **** REAL SIMULATION ****
> info: Entering event queue @ 0.  Starting simulation...
> FATAL: kernel too old
> panic: Tried to read unmapped address 0xffffffffffffffd0.
>  @ cycle 6273000
>
>
> However, there is no problem with my binary; I will send it to you.
>
>
>
> On 6/8/12, mingkai huang <huangming...@gmail.com> wrote:
> > No fast forward and no max-inst limit. I compiled the binary from SPEC2006
> > and did not modify the source files. The input is the test input set.
> > I posted the link to download the binary and input in my previous email,
> > and the command I used is in my first email.
> >
> > My command line is:
> > build/X86/gem5.fast configs/example/se.py --cpu-type=detailed --caches -c
> > bzip2 -o "input.program 5"
> >
> > My gem5 revision is 8981.
> > The OS I used is RHEL 6.2.
> >
> > The link to my binary and input:
> >
> > http://mail.qq.com/cgi-bin/ftnExs_download?k=0962383845ebe89e745811294262054e53570059515a0e041a555c0f544f0356540315015355534c0f530f01535253000556080a646b37034d0b480a4a105613375f&t=exs_ftn_download&code=7b88db7a
> >
> > On Thu, Jun 7, 2012 at 12:57 PM, Mahmood Naderan <mahmood...@gmail.com> wrote:
> >
> >> What input do you use? Have you modified the source code? How much fast
> >> forward? What max inst/tick?
> >> I have no such problem with bzip2 with -F 2000000000 --maxtick
> >> 100000000000
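> >> Combined with the command from your first email, that would be roughly:
> >>   build/X86/gem5.fast configs/example/se.py --cpu-type=detailed --caches \
> >>       -c bzip2 -o "input.program 5" -F 2000000000 --maxtick 100000000000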
> >>
> >> On 6/7/12, mingkai huang <huangming...@gmail.com> wrote:
> >> > Thanks! I tried that, but it seems that doesn't fix the problem.
> >> > When bzip2 goes wrong, the output is many repetitions of "info: Increasing
> >> > stack size by one page." followed by "fatal: Over max stack size for one
> >> > thread". I think bzip2 goes into an infinite loop and grows the stack
> >> > forever, so no matter how large the stack size is set, it will run out.
> >> >
> >> > On Sun, Jun 3, 2012 at 4:23 PM, Mahmood Naderan <mahmood...@gmail.com> wrote:
> >> >
> >> >> I faced that before. Thanks to Ali, the problem is now fixed.
> >> >> You should modify two files:
> >> >>
> >> >> 1) src/sim/process.cc
> >> >> You should find something like this:
> >> >>  if (stack_base - stack_min > 8 * 1024 * 1024)
> >> >>     fatal("Over max stack size for one thread\n");
> >> >>
> >> >>
> >> >> 2) src/arch/x86/process.cc
> >> >> You should find two occurrences of this statement:
> >> >>    next_thread_stack_base = stack_base - (8 * 1024 * 1024);
> >> >>
> >> >> Now change the right-hand side from 8*1024*1024 to whatever you want;
> >> >> 32*1024*1024 should be enough, I think.
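> >> >> For example, with a 32 MB limit the two places would end up looking
> >> >> roughly like this (exact surrounding code varies between revisions, so
> >> >> treat it as a sketch):
> >> >>
> >> >>   // src/sim/process.cc
> >> >>   if (stack_base - stack_min > 32 * 1024 * 1024)
> >> >>       fatal("Over max stack size for one thread\n");
> >> >>
> >> >>   // src/arch/x86/process.cc (both occurrences)
> >> >>   next_thread_stack_base = stack_base - (32 * 1024 * 1024);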
> >> >>
> >> >> Hope that helps.
> >> >>
> >> >> On 6/3/12, mingkai huang <huangming...@gmail.com> wrote:
> >> >> > Hi,
> >> >> > I am sorry to reply so late.
> >> >> > I tracediffed the output of 8842 and 8841 and attached the result. I
> >> >> > revised one place in the output format of 8841 to make the outputs more
> >> >> > similar, but the format still changed a lot, so the diff may not be very
> >> >> > helpful. I also set a breakpoint, printed out the stack trace, and
> >> >> > attached that output as well.
> >> >> > This is the bzip2 binary and input I used:
> >> >> > http://mail.qq.com/cgi-bin/ftnExs_download?k=0962383845ebe89e745811294262054e53570059515a0e041a555c0f544f0356540315015355534c0f530f01535253000556080a646b37034d0b480a4a105613375f&t=exs_ftn_download&code=7b88db7a
> >> >> > Because of the mailing list size limitation, I used the QQ large file
> >> >> > attachment service.
> >> >> > Thanks!
> >> >> >
> >> >> > On Thu, May 17, 2012 at 6:49 AM, Steve Reinhardt <ste...@gmail.com> wrote:
> >> >> >> Hi Mingkai,
> >> >> >>
> >> >> >> Can you run under gdb, put a breakpoint on this fatal statement (which
> >> >> >> is in Process::fixupStackFault() in sim/process.cc), print out the stack
> >> >> >> trace when you hit it, and mail that to the list?
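> >> >> >> A minimal sketch of such a gdb session (assuming the gem5.debug build;
> >> >> >> the exact line number of the fatal() call depends on your revision, so
> >> >> >> look it up first):
> >> >> >>
> >> >> >>   $ grep -n "Over max stack size" src/sim/process.cc
> >> >> >>   $ gdb --args build/X86/gem5.debug configs/example/se.py \
> >> >> >>         --cpu-type=detailed --caches -c bzip2 -o "input.program 5"
> >> >> >>   (gdb) break process.cc:<line from grep>
> >> >> >>   (gdb) run
> >> >> >>   (gdb) bt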
> >> >> >>
> >> >> >> I wonder if the new branch predictor is causing some different wrong-path
> >> >> >> execution, and that we are erroneously calling fatal() on something that
> >> >> >> looks like a stack fault but is actually a misspeculated instruction.
> >> >> >>
> >> >> >> Given that all the regressions pass, I doubt the new branch predictor is
> >> >> >> actually changing the committed execution path.  That's why I think it may
> >> >> >> have something to do with a bug in how we handle misspeculation.
> >> >> >>
> >> >> >> If anyone knows the code well enough to say whether this seems likely or
> >> >> >> unlikely, that would be helpful.
> >> >> >>
> >> >> >> Steve
> >> >> >>
> >> >> >>
> >> >> >> On Wed, May 16, 2012 at 3:09 PM, Geoffrey Blake <bla...@umich.edu> wrote:
> >> >> >>>
> >> >> >>> Unfortunately the CheckerCPU does not work for x86 and is only verified
> >> >> >>> as working on ARM. It needs some additional work to support the
> >> >> >>> representation of machine instructions for x86.
> >> >> >>>
> >> >> >>> Geoff
> >> >> >>>
> >> >> >>> On Tue, May 15, 2012 at 7:43 AM, Gabe Black <gbl...@eecs.umich.edu> wrote:
> >> >> >>> > The change may have made the branch predictor code behave incorrectly:
> >> >> >>> > for instance, an instruction could execute twice, a misspeculated
> >> >> >>> > instruction could sneak through and commit, an instruction could be
> >> >> >>> > skipped, or a branch could be "corrected" to go down the wrong path.
> >> >> >>> > There are lots of things that could go wrong. Alternatively, the branch
> >> >> >>> > predictor might have simply gotten better and put more stress on some
> >> >> >>> > other part of the CPU, or coincidentally lined up circumstances which
> >> >> >>> > expose another bug. You should try to find where execution diverges
> >> >> >>> > between O3 and the atomic CPU, possibly using tracediff or possibly
> >> >> >>> > using the checker CPU. I'm not sure the checker works correctly with
> >> >> >>> > x86, but if it does, this is pretty much exactly what it's for.
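> >> >> >>> > As a rough manual alternative to the tracediff script, you can dump an
> >> >> >>> > Exec trace from each run and diff them; something like this (the
> >> >> >>> > build_8841/build_8842 paths are just placeholders, and older revisions
> >> >> >>> > use --trace-flags/--trace-file instead of --debug-flags/--debug-file):
> >> >> >>> >
> >> >> >>> >   build_8841/X86/gem5.opt -d out.8841 --debug-flags=Exec --debug-file=trace.out \
> >> >> >>> >       configs/example/se.py --cpu-type=detailed --caches -c bzip2 -o "input.program 5"
> >> >> >>> >   build_8842/X86/gem5.opt -d out.8842 --debug-flags=Exec --debug-file=trace.out \
> >> >> >>> >       configs/example/se.py --cpu-type=detailed --caches -c bzip2 -o "input.program 5"
> >> >> >>> >   diff out.8841/trace.out out.8842/trace.out | head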
> >> >> >>> >
> >> >> >>> > Gabe
> >> >> >>> >
> >> >> >>> > On 05/14/12 17:22, mingkai huang wrote:
> >> >> >>> >> Hi,
> >> >> >>> >> I tried to use gem5 to run SPEC2006 in x86 O3 mode. When I ran bzip2,
> >> >> >>> >> it failed with:
> >> >> >>> >> fatal: Over max stack size for one thread
> >> >> >>> >> My command line is:
> >> >> >>> >> build/X86/gem5.fast configs/example/se.py --cpu-type=detailed --caches
> >> >> >>> >> -c bzip2 -o "input.program 5"
> >> >> >>> >> My gem5 revision is 8981.
> >> >> >>> >> Bzip2 runs correctly in atomic mode.
> >> >> >>> >> I bisected to find where the problem first appeared, and found revision
> >> >> >>> >> 8842. I noticed this patch is about branch prediction, and I don't
> >> >> >>> >> understand how it can affect the correctness of an application.
> >> >> >>> >> Before 8842, bzip2 runs correctly in both modes, but the number of
> >> >> >>> >> "info: Increasing stack size by one page." messages printed is not
> >> >> >>> >> equal.
> >> >> >>> >> Because of the email size limitation, I can't attach the files I used.
> >> >> >>> >>
> >> >> >>> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >
> >> >> >
> >> >> >
> >> >> > --
> >> >> > Best regards,
> >> >> > Mingkai Huang
> >> >> >
> >> >>
> >> >>
> >> >> --
> >> >> // Naderan *Mahmood;
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > Best regards,
> >> > Mingkai Huang
> >> >
> >>
> >>
> >> --
> >> // Naderan *Mahmood;
> >>
> >
> >
> >
> > --
> > Best regards,
> > Mingkai Huang
> >
>
>
> --
> // Naderan *Mahmood;
>



-- 
Best regards,
Mingkai Huang
_______________________________________________
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
