Hi all,
I am running Parsec with multiple cores in X86 full-system (FS) mode with the
Ruby memory system and the latest stable gem5 version. I successfully took a
checkpoint using the timing simple CPU and then restored from the checkpoint
using the "--restore-with-cpu=detailed" option.
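For reference, the two runs look roughly like this (placeholders in angle
brackets; build target and paths abbreviated from my setup, and the checkpoint
itself is triggered by an m5 checkpoint call inside the PARSEC .rcS script):

# checkpoint run with the timing simple CPU
build/X86/gem5.opt configs/example/fs.py --ruby --num-cpus=<N> \
    --cpu-type=timing --script=<parsec benchmark .rcS> --checkpoint-dir=<ckpt dir>

# restore run with the detailed (O3) CPU
build/X86/gem5.opt configs/example/fs.py --ruby --num-cpus=<N> \
    --checkpoint-dir=<ckpt dir> -r 1 --restore-with-cpu=detailed
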
My first question is: what is the difference between the "--restore-with-cpu"
option and the "--cpu-type" option in terms of functionality? (When I tried to
restore using "--cpu-type" instead, I got an error.)
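For completeness, the failing attempt was roughly the same restore command with
the CPU type passed via --cpu-type instead (error message omitted here):

build/X86/gem5.opt configs/example/fs.py --ruby --num-cpus=<N> \
    --checkpoint-dir=<ckpt dir> -r 1 --cpu-type=detailed
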
My end goal is to restore from the checkpoint and run the rest of the benchmark
with the detailed (O3) CPU. To check whether the detailed CPU is actually used
after switching, I put a print statement in src/cpu/o3/cpu.cc inside the
instDone() function, as shown below, and rebuilt gem5:

template <class Impl>
void
FullO3CPU<Impl>::instDone(ThreadID tid, DynInstPtr &inst)
{
    // Keep an instruction count.
    if (!inst->isMicroop() || inst->isLastMicroop()) {
        thread[tid]->numInst++;
        thread[tid]->numInsts++;
        committedInsts[tid]++;
    }
    thread[tid]->numOp++;
    thread[tid]->numOps++;
    committedOps[tid]++;

    system->totalNumInsts++;
    /* Fulya's print statement (needs <iostream> included in cpu.cc) */
    std::cout << "totalNumInsts=" << system->totalNumInsts << std::endl;
    /* End of Fulya's print statement */

    // Check for instruction-count-based events.
    comInstEventQueue[tid]->serviceEvents(thread[tid]->numInst);
    system->instEventQueue.serviceEvents(system->totalNumInsts);
}

It prints the statement every time a new instruction is committed, but only
until the cores are switched (about 130 instructions). However, after the point
where it prints out **** REAL SIMULATION ****, it stops printing the
instruction count. I suspect that it actually starts the simulation with the O3
CPU, switches right after that to the timing CPU, and runs the rest of the
simulation with the timing CPU. This does not make sense to me, but could that
be the case?
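
As an additional sanity check on my side, I intend to compare the per-CPU
committed-instruction counters in m5out/stats.txt after the run, along these
lines (exact stat names may differ between gem5 versions):

grep committedInsts m5out/stats.txt
grep sim_insts m5out/stats.txt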
Best,
Fulya

