What input do you use? Have you modified the source code? How far do you fast-forward, and what maximum instruction/tick counts do you use? I have no such problem with bzip2 with -F 2000000000 --maxtick 100000000000.
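For example, a run of roughly this shape (only a sketch: substitute your own binary and input, the bzip2 arguments below are just the ones from your original mail, and -F/--maxtick are parsed by configs/common/Options.py, so check the spelling in your revision):

    build/X86/gem5.fast configs/example/se.py --cpu-type=detailed --caches \
        -F 2000000000 --maxtick 100000000000 \
        -c bzip2 -o "input.program 5"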
On 6/7/12, mingkai huang <huangming...@gmail.com> wrote:
> Thanks! I tried that, but it doesn't seem to fix the problem. When bzip2
> goes wrong, the output is many "info: Increasing stack size by one page."
> messages followed by "fatal: Over max stack size for one thread". I think
> bzip2 goes into an infinite loop and grows the stack forever; no matter
> how large the stack size is set, it will eventually run out.
>
> On Sun, Jun 3, 2012 at 4:23 PM, Mahmood Naderan <mahmood...@gmail.com> wrote:
>
>> I faced that before. Thanks to Ali, the problem is now fixed.
>> You should modify two files:
>>
>> 1) src/sim/process.cc
>> You should find something like this:
>>     if (stack_base - stack_min > 8 * 1024 * 1024)
>>         fatal("Over max stack size for one thread\n");
>>
>> 2) src/arch/x86/process.cc
>> You should find two occurrences of this statement:
>>     next_thread_stack_base = stack_base - (8 * 1024 * 1024);
>>
>> Now change the right-hand side from 8*1024*1024 to whatever you want;
>> 32*1024*1024 is enough, I think.
>>
>> Hope that helps.
>>
>> On 6/3/12, mingkai huang <huangming...@gmail.com> wrote:
>> > Hi,
>> > Sorry for the late reply.
>> > I tracediffed the output of 8842 and 8841 and attached the result. I
>> > changed one place in the output format of 8841 to make the two outputs
>> > more similar, but the format still differs a lot, so the diff may not
>> > be very helpful. I also set a breakpoint, printed out the stack trace,
>> > and attached that output.
>> > This is the bzip2 binary and input I used:
>> > http://mail.qq.com/cgi-bin/ftnExs_download?k=0962383845ebe89e745811294262054e53570059515a0e041a555c0f544f0356540315015355534c0f530f01535253000556080a646b37034d0b480a4a105613375f&t=exs_ftn_download&code=7b88db7a
>> > Because of the mailing list size limit, I used the QQ large-file
>> > attachment service.
>> > Thanks!
>> >
>> > On Thu, May 17, 2012 at 6:49 AM, Steve Reinhardt <ste...@gmail.com> wrote:
>> >> Hi Mingkai,
>> >>
>> >> Can you run under gdb, put a breakpoint on this fatal statement (which
>> >> is in Process::fixupStackFault() in sim/process.cc), print out the
>> >> stack trace when you hit it, and mail that to the list?
>> >>
>> >> I wonder if the new branch predictor is causing some different
>> >> wrong-path execution, and whether we are erroneously calling fatal()
>> >> on something that looks like a stack fault but is actually a
>> >> misspeculated instruction.
>> >>
>> >> Given that all the regressions pass, I doubt the new branch predictor
>> >> is actually changing the committed execution path. That's why I think
>> >> it may have something to do with a bug in how we handle misspeculation.
>> >>
>> >> If anyone knows the code well enough to say whether this seems likely
>> >> or unlikely, that would be helpful.
>> >>
>> >> Steve
>> >>
>> >> On Wed, May 16, 2012 at 3:09 PM, Geoffrey Blake <bla...@umich.edu> wrote:
>> >>>
>> >>> Unfortunately the CheckerCPU does not work for x86 and is only
>> >>> verified as working on ARM. It needs some additional work to support
>> >>> the representation of machine instructions for x86.
>> >>>
>> >>> Geoff
>> >>>
>> >>> On Tue, May 15, 2012 at 7:43 AM, Gabe Black <gbl...@eecs.umich.edu> wrote:
>> >>> > The change may have made the branch predictor code behave
>> >>> > incorrectly: for instance, an instruction could execute twice, a
>> >>> > misspeculated instruction could sneak through and commit, an
>> >>> > instruction could be skipped, or a branch could be "corrected" to
>> >>> > go down the wrong path. There are lots of things that could go
>> >>> > wrong. Alternatively, the branch predictor might have just gotten
>> >>> > better and put more stress on some other part of the CPU, or
>> >>> > coincidentally lined up circumstances which expose another bug.
>> >>> > You should try to find where execution diverges between O3 and the
>> >>> > atomic CPU, possibly using tracediff or possibly using the checker
>> >>> > CPU. I'm not sure the checker works correctly with x86, but if it
>> >>> > does, this is pretty much exactly what it's for.
>> >>> >
>> >>> > Gabe
>> >>> >
>> >>> > On 05/14/12 17:22, mingkai huang wrote:
>> >>> >> Hi,
>> >>> >> I tried to use gem5 to run SPEC2006 in x86 O3 mode. When I ran
>> >>> >> bzip2, it failed with:
>> >>> >>     fatal: Over max stack size for one thread
>> >>> >> My command line is:
>> >>> >>     build/X86/gem5.fast configs/example/se.py --cpu-type=detailed
>> >>> >>         --caches -c bzip2 -o "input.program 5"
>> >>> >> My gem5 version is revision 8981.
>> >>> >> bzip2 runs correctly in atomic mode.
>> >>> >> I binary-searched for the revision where the problem first
>> >>> >> appeared and found 8842. I noticed this patch is about branch
>> >>> >> prediction, and I don't understand why it could affect the
>> >>> >> correctness of an application. Before 8842, bzip2 runs correctly
>> >>> >> in both modes, but the number of "info: Increasing stack size by
>> >>> >> one page." messages printed is not the same.
>> >>> >> Because of the email size limit, I can't attach the file I used.
>> >
>> > --
>> > Best regards,
>> > Mingkai Huang
>>
>> --
>> // Naderan *Mahmood;
>
> --
> Best regards,
> Mingkai Huang

--
// Naderan *Mahmood;
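A sketch of the two source edits Mahmood describes above, assuming your checkout matches the lines he quotes (32 MB is just the value he suggests; raise it further if the workload really needs a bigger stack):

    // src/sim/process.cc, inside Process::fixupStackFault():
    // raise the per-thread stack limit from 8 MB to 32 MB
    if (stack_base - stack_min > 32 * 1024 * 1024)
        fatal("Over max stack size for one thread\n");

    // src/arch/x86/process.cc, at both occurrences:
    // reserve a matching 32 MB region below the stack base for each thread
    next_thread_stack_base = stack_base - (32 * 1024 * 1024);

Note that, as Mingkai reports above, if the guest program is genuinely growing its stack without bound, or if gem5 is mis-detecting a stack fault on a misspeculated access, raising the limit only delays the fatal rather than fixing it.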
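And a minimal sketch of the gdb session Steve suggests above, assuming a build with symbols (gem5.opt or gem5.debug) and the same command line as before; stack_base and stack_min are the Process members from the snippet Mahmood quotes:

    gdb --args build/X86/gem5.opt configs/example/se.py --cpu-type=detailed --caches -c bzip2 -o "input.program 5"
    (gdb) break Process::fixupStackFault
    (gdb) run
    (gdb) print stack_base - stack_min
    (gdb) continue
    (gdb) bt

The breakpoint fires on every stack-growth fault, not just the fatal one, so keep continuing and re-checking stack_base - stack_min until it approaches 8*1024*1024, then take the backtrace with bt and mail it to the list.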