Thnx Andreas
Yup, the system is running at GHz.
It explains the sim tick counts.
Mann
On Tue, Sep 25, 2012 at 4:02 AM, Andreas Hansson wrote:
> Hi Mann,
>
> I think you might be confusing accuracy and precision here :). No one, as
> far as I know, has tried to make the gem5 models cycle accurate…but
Hi,
I want to simulate a multi-threaded program on a multicore supporting
SMT. I used m5threads to make some PARSEC benchmarks run in SE mode.
They run fine if I don't use SMT (i.e., one thread per core).
To enable a multi-threaded program to run on multiple cores, the
workload of each core
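For context, the non-SMT setup described above (one thread per core) typically looks something like this minimal sketch, where a single process object is shared by all CPUs; the benchmark name and arguments are placeholders:

# Minimal sketch only: one statically linked m5threads binary shared by all
# CPUs in SE mode, so each spawned pthread lands on its own core.
process = LiveProcess()
process.cmd = ['blackscholes', '4', 'in_4K.txt', 'prices.txt']

system.cpu = [DerivO3CPU(cpu_id=i) for i in range(4)]
for cpu in system.cpu:
    cpu.workload = process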
You can use *gcc-arm-linux-gnueabi* in Ubuntu.
On Tue, Sep 25, 2012 at 6:28 PM, Musharaf Hussain wrote:
> Can anyone please explain the Sourcery CodeBench installation and its use
> with gem5 and benchmarks?
> I want to use it for an ARM processor. I am on an Ubuntu 12.04 desktop
> workstation 8.0 V
I'm guessing the value 4 is at one of the corner routers where there are perhaps
one L1, one Dir, and two network ports.
From a router's perspective, all input/output ports are the same. If you want
to know which ones are connected to the directory, I think you will need to set
some variables in t
Hi Tejasi,
I tried it too and yeah it fails, not sure why...
The same code works in src/mem/ruby/system/RubyMemoryControl.cc
A cast into NetworkMessage has been done in places like RoutingUnit_d but that
won't give you the message type.
You'll have to dig in and see why the cast fails…
- Tushar
On Mon, 24 Sep 2012, z...@uwaterloo.ca wrote:
Hi Nilay,
Okay, if forcing the writeback is not yet implemented, then I want to at
least know the number of dirty lines in the cache at the end of simulation. How
would I get this stat/parameter? Can you point me to the right source file if
I need t
On Tue, 25 Sep 2012, Jun Pang wrote:
Hi Nilay,
Sorry, I forgot to ask a question. I have tried using the default
clock rate for the CPUs (2GHz) and it boots successfully. I wonder if it is
possible to have CPUs with 5GHz clock rates. If so, what's the correct way
to make it work? Is there a l
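For reference, a minimal sketch of what this might look like in a config script, assuming the CPU exposes a clock parameter directly (as it did around this time); 5GHz corresponds to 200 ticks per cycle at the default 1 THz tick resolution:

# Sketch only: set the CPU clock in the config script; '5GHz' is a
# 200-tick period at gem5's default 1 THz tick rate.
for cpu in system.cpu:
    cpu.clock = '5GHz'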
If the Message* is not a MemoryMsg*, then the safe_cast will fail.
--
Nilay
On Wed, 26 Sep 2012, Tushar Krishna wrote:
Hi Tejasi,
I tried it too and yeah it fails, not sure why...
The same code works in src/mem/ruby/system/RubyMemoryControl.cc
A cast into NetworkMessage has been done in places li
Hi,
I would like to run multiple programs on an inorder core which supports SMT.
For this I needed to replace MaxThreads = 1 with (e.g.) MaxThreads = 2 in
src/cpu/inorder/pipeline_traits.hh.
However, running the simulator with SMT enabled for an inorder core
results in a segmentation fault.
Do
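For what it's worth, the SE-mode side of this usually amounts to giving the CPU one workload per hardware thread, along the lines of the sketch below (program names are placeholders, and the MaxThreads rebuild described above is still required):

# Rough sketch of a two-thread SMT workload in SE mode; 'prog_a' and
# 'prog_b' are placeholder binaries.
thread0 = LiveProcess(cmd=['prog_a'])
thread1 = LiveProcess(cmd=['prog_b'])

system.cpu = InOrderCPU(cpu_id=0)
system.cpu.numThreads = 2
system.cpu.workload = [thread0, thread1]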
Hi Stijn,
This is a confusing case. Based on what you're describing (and thanks for
the thorough description btw!), my hypothesis of what's happening is that
you are successfully creating a single workload SimObject, but that that
single object is getting added to the list of the CPU object's chi
Hi Max,
Did you recompile the model after changing MaxThreads?
In terms of debugging, can you locate the code that populates the
fetchprioritylist? Can you double-check that a change in MaxThreads is
setting up that list correctly?
-Korey
On Wed, Sep 26, 2012 at 7:51 AM, Maximilien Breughe <
max
Ali Saidi writes:
> Hi Lluis,
>
> I just tried it and it appears to be working. Could you check again and see if
> you still have the problem?
Looks like it's some kind of weird problem on my machine. I tried registering
from another machine and it worked perfectly.
Sorry for the noise.
Th
I see. Thanks!
Jun
On Wed, Sep 26, 2012 at 10:38 AM, Nilay Vaish wrote:
> On Tue, 25 Sep 2012, Jun Pang wrote:
>
> Hi Nilay,
>>
>> Sorry, I forgot to ask a question. I have tried using the default
>> clock rate for the CPUs (2GHz) and it boots successfully. I wonder if it is
>> possible to h
Background:
I have a non-O3, out-of-order CPU implemented in gem5. Since I don't have
a checker implemented yet, I tend to diff committed instructions vs. O3.
Yesterday's patches caused a few of these diffs to change because of
load-linked/store-conditional behavior (better prediction on data ops tha
Hi Mitch,
I wonder if this happens in the steady state? With the
implementation the store-set predictor should predict that the store is
going to conflict with the load and order them. Perhaps that isn't getting
trained correctly with LLSC ops. You really don't want to mark the ops
as serializing a
With which ISAs and CPU models does SMT work?
I've tried running different configs and it appears that only "ALPHA +
detailed" works.
I chased down a panic when I attempted to use "ALPHA+inorder" and it said I need
to increase 'MaxThreads'.
I looked at the "inorder" cpu code and noticed that
Hi Michael,
I'm not sure how to quantify "reliably", but given the lack of extensive
regressions for SMT on various CPU models, all I can say is that it *should*
work with configuration/parameter changes in SE mode. Over the years,
different people have varied issue width as well as the number of threads so
Hi Nilay and Tushar,
Thanks for your response. I thought so earlier, but I tried running a pure
memory test (e.g. ruby_mem_test.py) as well as ruby_random_test. Shouldn't
the memory test inject only memory request packets into the network? In that
case, why should the type cast fail? Please correct me
Thanks for the reply.
Thinking about this... I don't know too much about the O3 store-set
predictor, but it would seem that load-linked instructions should care
about the entire cache line, not just if the store happens to overlap.
Since it looks like the pending stores write to the address rang
This is a pretty interesting issue. I'm not sure how it would be handled
in practice. Since the loads and stores in question are not to the same
address, in theory at least the store-set predictor should not be involved. My
guess is that the most straightforward fix would be to record the actual
ran
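To make the granularity point concrete, the idea is just to compare addresses after masking them down to their cache line; a toy illustration (the line size is an assumption):

# Toy illustration: treat two accesses as conflicting for LL/SC purposes if
# they fall in the same cache line, not only if their byte ranges overlap.
LINE_BYTES = 64  # assumed line size

def line_addr(addr, line_bytes=LINE_BYTES):
    return addr & ~(line_bytes - 1)

def same_line(load_addr, store_addr):
    return line_addr(load_addr) == line_addr(store_addr)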
Hi Steve,
Your workaround solved the issue. Thanks for the quick and easy solution!
Stijn
On 26/09/2012 17:08, Steve Reinhardt wrote:
Hi Stijn,
This is a confusing case. Based on what you're describing (and thanks
for the thorough description btw!), my hypothesis of what's happening
is t
Hmm, I had normally thought that LL/SC were handled with special address
range registers at the cache controller. Since a core should really only
have one outstanding LL/SC pair, a register per core would suffice and
exactly encode the range. Basically doing the same thing that your
more-fine grai
That's a reasonable hardware implementation. Actually you need a register
per hardware thread context, not just per core.
Our software implementation is intended to model such a hardware
implementation, but the actual software is different for a couple of
reasons. The main one is that we don't w
Hello,
I'm just starting to try out gem5, and I ran into an error today using
the gem5 --list-sim-objects argument. This is from a week-old gem5 checkout.
david@david-ThinkPad-T410:~/gem5$ build/ARM/gem5.opt --list-sim-objects
...
ArmTLB
size
default: 64
desc: TL
Hi all,
I have run the Canneal benchmark on gem5 with 64 cores. For windows of a
fixed size (e.g. 100 CPU cycles per window) I want to calculate the power
consumption for EACH window using the McPAT tool. To do this, gem5 has to
generate a stats.txt block for each window, which is then fed to McPAT
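One way to get a stats block per window (a sketch, assuming the simulation is driven from your own Python script rather than the stock se.py/fs.py loop) is to call m5.simulate() with a fixed tick budget and dump/reset the statistics each iteration:

# Sketch of windowed stats dumping for McPAT post-processing. The window
# length in ticks is an assumption: 100 cycles at a 2GHz clock is
# 100 * 500 = 50000 ticks at the default 1 THz tick rate.
import m5

m5.instantiate()

window_ticks = 100 * 500
event = m5.simulate(window_ticks)
while event.getCause() == "simulate() limit reached":
    m5.stats.dump()   # appends one statistics block to m5out/stats.txt
    m5.stats.reset()  # so the next block covers only its own window
    event = m5.simulate(window_ticks)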