Why should it be N*4GB? Is each process really using 4GB of memory? That
doesn't seem right. I'd guess your issue is that a bad request is being
generated somewhere in the memory system, and making the memory incredibly
large is just covering the problem up.
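
For reference, a minimal sketch (not your actual script; the '8GB' value and
the 'physmem' name are placeholders) of where the simulated memory size is
typically declared in a gem5 Python config:

    # Incomplete sketch: CPUs, buses, and port connections are omitted.
    from m5.objects import System, SimpleMemory, AddrRange

    system = System()
    # Total simulated DRAM. gem5 allocates a host backing store for this
    # range, which is what fails with "could not mmap" when the host
    # cannot satisfy the request.
    system.mem_ranges = [AddrRange('8GB')]
    system.physmem = SimpleMemory(range=system.mem_ranges[0])

This size is what the simulated system sees; it should not need to scale
with the number of cores unless each core's workload genuinely needs that
much memory.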
Ali

On Feb 2, 2013, at 2:57 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:

> It is really annoying. I think mmap is responsible for this behavior.
> 
> On 1/31/13, Mahmood Naderan <mahmood...@gmail.com> wrote:
>> Hi
>> I have found that in an N-core simulation, the defined memory size
>> should be N*4GB. For example, I am simulating eight cores, so I have to
>> define 32GB of memory. Otherwise I randomly get an "unable to find
>> destination ..." error.
>> 
>> However, while the simulation is actually running, the "top" command
>> shows that only about 1.5GB of memory is used.
>> 
>> Another problem is that on a system with 32GB of memory installed and
>> 20GB free, I cannot define a 32GB memory in the gem5 simulation
>> script. If I do, I get a "could not mmap" error message.
>> 
>> Is there a better way to manage memory here?
>> 
>> 
>> Regards,
>> Mahmood
>> 
> 
> 
> -- 
> Regards,
> Mahmood

_______________________________________________
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
