Hi,
In which CPU mode are you simulating? Is it with the detailed CPU model?
On 19 May 2016 19:44, "Renju Boben" wrote:
> Hi,
> I am running mcf benchmarks, from SPEC 2006. To verify the progress I
> am dumping the stats after every tick. Initially it is
> simulating without any issue
protocol.
>
> Andreas
>
> From: biswabandan panda
> Reply-To: gem5 users mailing list
> Date: Monday, 20 April 2015 14:10
> To: gem5 users mailing list
> Subject: Re: [gem5-users] Regarding outstanding request queue at
> coherent_bus.cc
>
> Hi Andreas, I am trans
this and suddenly end up
> with a 128 byte request. To me it sounds like you are creating a next-line
> prefetcher, but in a very painful way.
>
> Am I missing something?
>
> Andreas
>
> From: biswabandan panda
> Reply-To: gem5 users mailing list
> Date: Sunda
Hi,
I am trying an optimization at the memory wherein I service two blocks of
cache simultaneously.
Suppose block X is requested by the LLC; dram_ctrl.cc now services block X
and block X+1, that is, I attach both block X and block X+1 to the same
packet (pktX) and send it to LLC.
A case arises wh
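The pairing idea described above can be sketched as follows; `Packet`, `service()`, and the byte values are all hypothetical stand-ins, not gem5's actual dram_ctrl.cc interface:

```python
BLOCK_SIZE = 64  # bytes; a typical cache line size

class Packet:
    """Toy response packet that can carry one or more cache blocks."""
    def __init__(self, addr):
        self.addr = addr
        self.blocks = {}  # block-aligned address -> data

def block_base(addr):
    return addr - (addr % BLOCK_SIZE)

def service(pkt, memory):
    """Service block X and also attach the adjacent block X+1."""
    base = block_base(pkt.addr)
    for a in (base, base + BLOCK_SIZE):
        pkt.blocks[a] = memory.get(a, b"\x00" * BLOCK_SIZE)
    return pkt

memory = {0: b"A" * 64, 64: b"B" * 64}
resp = service(Packet(0), memory)
print(sorted(resp.blocks))  # [0, 64]
```

The LLC would then have to unpack both blocks from the single response, which is exactly where the corner cases mentioned in the thread arise.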
Hi,
I am running simulations in SE mode with dram_ctrl.cc (modeling DDR3),
with a single memory controller. As far as I understand, in SE mode the
entire application would be loaded on to the memory and then the simulation
is started (no page faults). I was trying to find out the dram rows (or
m
Hi,
As gem5 uses a non-inclusive hierarchy, the 1st point is not applicable.
Your second point makes sense. It is not mandatory to invalidate at the
LLC. The block at the LLC should be invalidated only if the hierarchy is an
exclusive one.
What gem5 does is, at a given instant of time, it makes sure
Non-determinism comes from the FS simulation.
You could try pinning the software threads to the
hardware threads. The miss rate varies because
of the dynamic behaviour of the synchronization primitives such as barriers
and locks.
On Wed, Sep 10, 2014 at 9:50 PM, Andreas Hansson via gem5-users <
g
dreas
>
> From: biswabandan panda via gem5-users
> Reply-To: biswabandan panda , gem5 users mailing
> list
> Date: Monday, 18 August 2014 18:53
> To: Nizamudheen Ahmed , gem5 users mailing list <
> gem5-users@gem5.org>
> Subject: Re: [gem5-users] Classic memory model
>
You could add a parameter say Level to identify the cache level. There is a
flag named isTopLevel already implemented, which distinguishes the L1 cache from
the MLCs and LLCs. You could add a similar one to differentiate the caches.
The other way to distinguish the L1 cache is to check the assoc in the
You should use the contextId. It is available throughout the memory
hierarchy.
For writeback requests, the value is not set. If you want the contextId of
the writeback requests also, small changes in the code will work for you.
On Sat, Jul 26, 2014 at 1:16 PM, Debiprasanna Sahoo via gem5-users <
>
> Andreas
>
> From: biswabandan panda via gem5-users
> Reply-To: biswabandan panda , gem5 users mailing
> list
> Date: Tuesday, 22 July 2014 13:25
> To: gem5 users mailing list
> Subject: [gem5-users] Responding multiple packets from the DRAM
>
> Hi,
>
Hi,
I am making changes to the memory system in dram_ctrl.cc. Once a packet
reads its data, I am not responding immediately, but only after I collect a
few more packets. I am storing them in a separate queue (the dram_pkt is
also deleted from the respQueue). Once the separate queue becomes full, I
Hi all,
I have been trying to run gem5 with this configuration -
3 level cache
L1: 4 cycle, 32kB 4 way
L2: 5 cycle, 256kB 8 way
L3: 9 cycle, 2MB, 16 way
parsec benchmark: vips - on a 4 core machine (checkpointed and run for 250
million cycles)
I get the following ipc
system.switch_cpus0.ipc_total
Hi All,
For a 2 level cache hierarchy with DDR3 (11-11-11) as the DRAM
specs, the number of prefetch requests issued by the L2 prefetcher is high
(very high in some cases) as compared to the number of prefetch requests
responded to by the DRAM controller. Any thoughts on this?
The difference
Hi,
Is targets_per_mshr set for the L2 level?
On Fri, Jun 20, 2014 at 12:38 PM, yuhang liu via gem5-users <
gem5-users@gem5.org> wrote:
> Dear Sir/Madam,
>
> With O3 CPU model, I test tgts_per_mshr = 1, 2, 4, 8, 16, 32, and found
> that the performance is best when tgts_per_ms
Hi,
That was a bug.
On Fri, Jun 13, 2014 at 6:19 AM, Mostafa via gem5-users wrote:
> Anju M A gmail.com> writes:
>
> >
> >
> >
> > Hello,
> > I have a doubt about a piece of code in the estimateLatency() function in
> simple_dram.cc
> > The code snippet is like this :if (bank.openRow == dram_pkt
sim_seconds
On Fri, May 30, 2014 at 7:03 PM, Heba Khdr via gem5-users <
gem5-users@gem5.org> wrote:
> Hi,
>
> how we get the execution time of the application from gem5, is it the
> sim_seconds ? or numCycles ?
>
> Best
>
>
> On Sat, May 24, 2014 at 3:56 PM,
IPC varies based on the input size, such as simsmall, simmedium, and simlarge.
Average IPC does not make sense.
The sum of IPCs gives the throughput of the system.
For parallel benchmarks, execution time is the correct metric for comparing
performance.
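To make the distinction concrete, here is a small sketch of the two metrics with made-up numbers (the IPCs, cycle count, and clock are all hypothetical):

```python
# Per-core IPCs from a hypothetical 4-core multiprogrammed run.
ipcs = [1.2, 0.8, 1.5, 0.5]

throughput = sum(ipcs)           # system throughput: the sum of the IPCs
average = sum(ipcs) / len(ipcs)  # an average IPC hides the per-core imbalance

# For a parallel benchmark, compare execution time instead:
cycles, freq_hz = 250_000_000, 2_000_000_000  # made-up run length and clock
exec_time_s = cycles / freq_hz

print(round(throughput, 3), round(average, 3), exec_time_s)  # 4.0 1.0 0.125
```

Note how two very different IPC distributions (e.g. [1.0, 1.0, 1.0, 1.0]) can produce the same average, which is why the thread recommends throughput or execution time instead.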
On Sat, May 24, 2014 at 3:47 PM, Ravi Verma via gem5-us
Hi,
If I have this scenario,
2 cores, 2 levels of cache
L1(private), L2(shared)
Both the cores have generated a miss for address X.
Core 0 - a read exclusive miss (its a write request)
Core 1 - a read miss
Now L2 MSHR has two targets for address X.
When the first target is popped out, in satisfyCp
If you are fast-forwarding it, then you should check the 3rd block of stats.
You should create a checkpoint directory to keep all the checkpoints.
While simulating, you could use --checkpoint-restore=1 for the latest one.
On Wed, Apr 30, 2014 at 12:19 PM, Prasanth Nunna wrote:
> Hi Biswa,
>
> Shoul
Hi,
If you are using checkpoints then you should use the 1st block of
stats (ROI).
On Tue, Apr 29, 2014 at 10:06 PM, Prasanth Nunna wrote:
> Hi everyone,
>
> I am new to gem5. I see that the stats are dumped 2-4 times in the
> stats.txt file. I tried running parsec benchmarks on ALPHA I
The classic memory model is non-inclusive.
On Mon, Apr 28, 2014 at 4:18 PM, Rodrigo Reynolds Ramírez <
rodrigo.r...@hotmail.com> wrote:
> Hello everyone,
>
> I need to work with different cache's models. I need to test different
> type of organizations: inclusive, exclusive and non - exclusive.
>
>
it does
On Tue, Mar 18, 2014 at 8:32 PM, senni sophiane wrote:
> Thanks Praxal,
>
> Actually, I am using classic memory system. Do you know if it is taken
> into account for classic memory ?
>
> Thank you
>
> Cordialement / Best Regards
>
> SENNI Sophiane
> Ph.D. candidate - Microelectronics
>
Hi Andreas,
We (me and Anju) are working together on it. Till now,
no luck, and we have migrated to a 32 GB machine. We will let you know if
something works.
On Wed, Mar 12, 2014 at 3:57 PM, Andreas Hansson wrote:
> Hi Anju,
>
> What was the outcome on this one? Any luck?
>
> T
example are "in the noise" regardless.
>
> Hope this helps.
>
>
>
> On Mon, Mar 10, 2014 at 7:54 AM, biswabandan panda wrote:
>
>> if i am not wrong, this is FS effect.
>>
>>
>> On Mon, Mar 10, 2014 at 6:20 PM, Praxal Shah wrote:
>>
If I am not wrong, this is an FS effect.
On Mon, Mar 10, 2014 at 6:20 PM, Praxal Shah wrote:
> Hi Andreas,
> Thank you for the reply.
> I understand that timing may change as I am changing bank configuration.
> But my question is about *Number of memory Access to the main memory and
> number of in
You should use checkpoints and restore them as per your needs. This will save
your future simulation time and you will get the stats for the ROI.
On Sat, Feb 1, 2014 at 10:03 PM, Hamid wrote:
>
>
> Hello,
>
> I have the same problem too. when ever I use the following code and script
> I
> get 4 d
Why don't you try creating and restoring checkpoints?
On Sat, Dec 14, 2013 at 5:26 PM, Fateme Movafagh
wrote:
> Hi,
>
> I am trying to run Parsec on x86 full system .I have used the proposed
> files of this thread:
>
> http://permalink.gmane.org/gmane.comp.emulators.m5.users/12526
>
> For exampl
You should implement a stream prefetcher with a maximum of 8 streams and set
your prefetch distance to 8 cache lines.
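A minimal sketch of such a prefetcher, with the 8-stream cap and the 8-line distance; the class name, the FIFO stream replacement, and the training policy are my assumptions, not gem5's actual implementation:

```python
LINE = 64          # cache line size in bytes
MAX_STREAMS = 8    # track at most 8 concurrent streams
DISTANCE = 8       # prefetch up to 8 lines ahead of a stream hit

class StreamPrefetcher:
    def __init__(self):
        self.streams = []  # next-expected line address per tracked stream

    def access(self, addr):
        """On an access to addr, return the prefetch addresses to issue."""
        line = addr - addr % LINE
        for i, expected in enumerate(self.streams):
            if line == expected:
                # Stream hit: advance it and prefetch DISTANCE lines ahead.
                self.streams[i] = line + LINE
                return [line + d * LINE for d in range(1, DISTANCE + 1)]
        # New stream; evict the oldest one if we already track 8.
        if len(self.streams) >= MAX_STREAMS:
            self.streams.pop(0)
        self.streams.append(line + LINE)
        return []

pf = StreamPrefetcher()
pf.access(0)                  # trains a new stream, issues nothing yet
issued = pf.access(64)        # stream hit -> prefetch the next 8 lines
print(issued[0], issued[-1])  # 128 576
```

A real implementation would also need confidence counters and backwards-stream detection, but this captures the two parameters the thread asks for.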
On Mon, Dec 2, 2013 at 10:16 PM, Fernando Endo wrote:
> Hello,
>
> If you're looking for stride prefetchers, they exist: For example in the
> O3_ARM_v7a.py configuration file, in the
013-11-29 01:05:07,"biswabandan panda" wrote:
>
> Are you using simlarge? Are these numbers from the ROI of the application?
>
>
> On Thu, Nov 28, 2013 at 10:22 PM, Yuhang Liu <168liuyuh...@163.com> wrote:
>
>> I run bodytrack (a parsec program) on 16 cpus with 16 thre
Are you using simlarge? Are these numbers from the ROI of the application?
On Thu, Nov 28, 2013 at 10:22 PM, Yuhang Liu <168liuyuh...@163.com> wrote:
> I run bodytrack (a parsec program) on 16 cpus with 16 threads, one thread
> per cpu. cpu0 is much busier than other processors.
>
> system.mem_c
m5.disableAllListeners()
put the above line in your fs.py
On Wed, Nov 27, 2013 at 8:38 PM, יואב אורן wrote:
> Hi,
>
> I'm running dedup from Parsec on a 64 cores architecture (bigtsunami) with
> two levels of cache that the second level is 64MB.
> While running i get this error:
>
>
>
>
>
>
>
Hi,
You could write an update_assoc_size() inside the cache or tags folder. I
think it's possible. If I am not wrong, you could do it inside cacheset.hh
for the assoc.
On Fri, Nov 22, 2013 at 10:26 PM, Rohit Shukla wrote:
> Hi everyone,
>
> I am trying to simulate selective way cache accesses using
,
>>> > assoc=options.l1d_assoc,
>>> > block_size=options.cacheline_size)
>>> > # When connecting the caches, the clock is also inherited
>>> > # from the CPU in qu
t; >
> PageTableWalkerCache())
> > else:
> > system.cpu[i].addPrivateSplitL1Caches(icache, dcache)
> > system.cpu[i].createInterruptController()
> > if options.l2cache:
> > system.l2[i].cpu_side = system.tol2bus[i].mast
Hi,
Could you report the number of committedInsts for both the cases?
On Tue, Nov 5, 2013 at 7:04 AM, fulya wrote:
> In single core case, there is a 1 MB L2 cache. In 4-core case, each core
> has its own private L2 cache of size 1 MB. As they are not shared, i dont
> understand the reaso
@Rodrigo - You could try something like this in your Simulation.py
def benchCheckpoints(options, maxtick, cptdir):
# exit_event = m5.simulate(maxtick - m5.curTick())
# exit_cause = exit_event.getCause()
if options.maxinsts:
count = 0
while count < options.num_cpus
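The snippet above is cut off in the archive. Below is a self-contained sketch of the loop shape it appears to describe: keep calling simulate until every CPU has reported hitting its instruction limit. The m5.simulate() call is stubbed out here, and the exact exit-cause string is an assumption:

```python
import itertools

# Stub standing in for m5.simulate(); a real run returns an exit event
# whose getCause() yields a string like the one matched below.
_events = itertools.cycle(["a thread reached the max instruction count"])

def simulate():
    return next(_events)

def bench_checkpoints(num_cpus, maxinsts):
    """Loop until every CPU has hit its maxinsts limit (sketch only)."""
    count = 0
    while count < num_cpus:
        cause = simulate()
        if cause == "a thread reached the max instruction count":
            count += 1
    return count

print(bench_checkpoints(num_cpus=4, maxinsts=250_000_000))  # 4
```

In the real Simulation.py the loop body would also handle other exit causes (checkpoints, m5 ops) rather than counting every event.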
no_mshr corresponds to the unavailability of MSHR entries (the MSHR is full). The
processor is blocked till a particular entry is free (after the response
comes, which causes the deallocation of an MSHR entry).
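The allocate/block/deallocate cycle described above can be modelled in a few lines; this is a toy sketch with made-up names, not gem5's MSHR class:

```python
class MSHR:
    """Toy miss status holding register file with a fixed entry count."""
    def __init__(self, num_entries):
        self.num_entries = num_entries
        self.outstanding = set()  # block addresses with in-flight misses

    def allocate(self, addr):
        """Return False (stall) when every entry is already in use."""
        if addr in self.outstanding:
            return True  # merged into the existing entry's target list
        if len(self.outstanding) == self.num_entries:
            return False  # no_mshr: the processor is blocked
        self.outstanding.add(addr)
        return True

    def on_response(self, addr):
        self.outstanding.discard(addr)  # response deallocates the entry

m = MSHR(num_entries=2)
print(m.allocate(0x100), m.allocate(0x200))  # True True
print(m.allocate(0x300))                     # False (full, stall)
m.on_response(0x100)
print(m.allocate(0x300))                     # True (entry freed)
```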
On Thu, Aug 8, 2013 at 10:25 AM, Mahmood Naderan wrote:
> Hi,
> I see some stats regarding block cause
@joel - for PARSEC, if i am restoring the benchmarks with the rcS scripts
with checkpoints, does this patch hold good? I tried the --rel-max-tick
but the simulations did cross the specified tick mark. Am i missing
something?
On Fri, Jul 26, 2013 at 11:12 PM, Zheng Wu wrote:
> Hi Joel,
>
> Than
why do you want it?
On Sat, Jul 6, 2013 at 10:59 AM, Bhawna Jain wrote:
> If we want the miss penalties to be same for every read miss and write
> miss, what changes should be made?
>
>
> On Fri, Jul 5, 2013 at 8:44 PM, biswabandan panda wrote:
>
>> Yup. It depends o
rity sampling interval for the monitor itself (and potentially
>> a global stat dump on a fine granularity as well) you can get a lot of
>> insight in the spatial and temporal communication behaviour (bandwidth,
>> latency, inter-transaction-time, address distribution etc).
>>
Correction - You should check recvTimingResp(PacketPtr pkt) and not
handleResponse. In earlier versions of gem5 handleResponse used to be there
instead of recvTimingResp.
On Thu, Jul 4, 2013 at 5:24 PM, biswabandan panda wrote:
> You should look at handleResponse function in cache_impl
You should look at the handleResponse function in cache_impl.hh. Based on the
request type (read or write), you could get the penalty.
On Thu, Jul 4, 2013 at 3:48 PM, Bhawna Jain wrote:
> How can we obtain cache miss read and write penalties in gem5? Not average
> but exact miss penalty
Writebacks come with a context-id equal to -1. You could also use the
masterId to find out the actual source of the request.
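The convention above can be sketched as a small classifier; the function, the sentinel constant, and the master-id table are illustrative assumptions, not gem5 API:

```python
WRITEBACK_CONTEXT = -1  # convention: writebacks carry context id -1

def classify(context_id, master_id, masters):
    """Return a (kind, source) pair for a memory request."""
    if context_id == WRITEBACK_CONTEXT:
        # No CPU context attached; fall back to the masterId for the source.
        return ("writeback", masters.get(master_id, "unknown"))
    return ("demand", f"cpu{context_id}")

masters = {3: "l1-dcache-writebacks"}
print(classify(0, 1, masters))   # ('demand', 'cpu0')
print(classify(-1, 3, masters))  # ('writeback', 'l1-dcache-writebacks')
```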
On Mon, Jul 1, 2013 at 4:14 AM, Amin Farmahini wrote:
> Hi All,
>
> If a request comes from a processor, the contextId() function can be used
> to identify the processor. B
If you are working on cache mgmt policies, then improvement in the
execution time is the right metric and not the ipc.
On Tue, Jun 25, 2013 at 6:13 AM, Rodrigo Reynolds Ramírez <
rodrigo.r...@hotmail.com> wrote:
> Thanks Joel, I looked that for comparing the use of each cpu, it seems ok.
> Becau
No, that's not always the case. Did you run for the ROI?
On Mon, Jun 24, 2013 at 9:35 PM, Rodrigo Reynolds Ramírez <
rodrigo.r...@hotmail.com> wrote:
> I think the OS scheduler tries to balance the cpu's load avoiding that ipc
> difference.
>
> Rodrigo
>
> --
> Date: M
why would the ipc be same?
On Mon, Jun 24, 2013 at 7:33 PM, Rodrigo Reynolds Ramírez <
rodrigo.r...@hotmail.com> wrote:
> Hello everyone
>
> I am currently working with parsec+alpha, I have executed some tests using
> the small inputs using two cpus, is it normal to get a very unbalanced ipc?
>
@Roberto - you could use the ContextId to distinguish the writebacks from
the demand misses. Writebacks have contextId -1.
@Steve - the gem5 hierarchy is non-inclusive, but the explanation you gave was
for an exclusive cache hierarchy. In a non-inclusive cache, demand responses go
through all the levels of
able: 1 dirty: 1 tag: 31 data: 0
> 1016000: system.cpu0.dcache: WriteResp addr: c5fc0 data : ac size 8
>
> Ali
>
>
> On Jun 16, 2013, at 9:54 PM, biswabandan panda
> wrote:
>
> Hi Ali,
> I did understand the flow from other caches to L1 and why the data needs
> to be k
the case of a
> hit the data is written into the line, in the case of a miss the line is
> requested from another cache or the memory system with exclusive access
> permission and then the write completes.
>
> Thanks,
> Ali
>
>
> On Jun 16, 2013, at 12:15 PM, biswabandan p
, 2013 at 9:33 PM, Ali Saidi wrote:
> Are you looking at a request or a response. The request very well could
> not have any data associated with it, but the response will.
>
> Ali
>
> On Jun 14, 2013, at 1:00 AM, biswabandan panda
> wrote:
>
> Hi,
> In an unmod
o I
track(print) it?
Thank you once again for your time.
On Fri, Jun 14, 2013 at 11:14 AM, Ali Saidi wrote:
> Actual data is stored in the caches.
>
> Ali
>
> On Jun 13, 2013, at 11:49 PM, biswabandan panda
> wrote:
>
>
> Hi,
> I have modified the access function
ank you once again for your time.
On Fri, Jun 14, 2013 at 11:14 AM, Ali Saidi wrote:
> Actual data is stored in the caches.
>
> Ali
>
> On Jun 13, 2013, at 11:49 PM, biswabandan panda
> wrote:
>
>
> Hi,
> I have modified the access function in cache to se
Hi,
I have modified the access function in the cache to service a miss as a hit
even when a particular block is not found. (This is just to collect some
statistics.) I had initially assumed that gem5 does not store actual data
in the cache, and hence if a request misses at L1, if I just do "insertBlock,
mem
n
>
>
> On Thu, Jun 6, 2013 at 7:26 PM, biswabandan panda wrote:
>
>> You could make the L2 private by using the addTwoLevelCacheHierarchy function,
>> which creates a private Icache, Dcache, and a private L2 cache.
>>
>>
>> On Thu, Jun 6, 2013 at 9:35 PM, יואב אורן wrote:
You could make the L2 private by using the addTwoLevelCacheHierarchy function,
which creates a private Icache, Dcache, and a private L2 cache.
On Thu, Jun 6, 2013 at 9:35 PM, יואב אורן wrote:
> and it is not possible to make l2 private as well?
>
>
> On Thu, Jun 6, 2013 at 7:03 PM, Andrws Vieira wrote:
>
>> I t
non-inclusive
On Sat, May 11, 2013 at 7:52 PM, Xiangyang Guo wrote:
> Hi, Dear Gem5 users,
>
> I have a quick question about the classic memory system, if we use 2 level
> or 3 level caches, so it is inclusive cache or exclusive cache?
>
> I checked the code but did not find the it. any hint is
gem5 puts the prefetched block into the cache
On Mon, May 6, 2013 at 7:02 AM, Xiangyang Guo wrote:
> Hi, Gem5 user,
>
> Can anyone tell me if Gem5 support streaming buffer? I mean, when we use
> the prefetcher, the prefetched block will be store in the cache ? or it
> will be stored in the stre
I think the present setup does not support multiple CMPs. You could try
something like cluster.py, which is in the directory named splash, but I am not
sure how trivial it is.
On Sun, Apr 21, 2013 at 9:44 PM, atish patra wrote:
> Hi All,
> I was wondering whether multiple CMP is enabled?? I went through g
Hi all,
Is the prefetching module written within Ruby stable with
the present Ruby setup? I found a seg fault when I tried running Ruby with
enable_prefetch = True for MESI_CMP_directory.py.
*command:*
./build/ALPHA/gem5.opt configs/example/ruby_fs.py -n 4 --l1i_size=32kB
--l1d_si
You can change it in se.py and fs.py.
On Tue, Nov 27, 2012 at 11:37 PM, Mahmood Naderan wrote:
> Hi
> How can I increase the size of a SimpleDRAM memory?
> in SimpleDRAM.py, I see only some timing parameters.
>
> --
> Regards,
> Mahmood
> ___
> gem5-user
set the path correctly in SysPaths.py
On Sun, Sep 9, 2012 at 11:44 PM, Munawira Kotyad wrote:
> Hi,
>
> I'm just getting started with gem5 and having trouble running the full
> system simulations as shown. How do I get rid of this?
>
> ece% ./build/ARM/gem5.opt configs/example/fs.py
> gem5 Simul
ContextId
On Mon, Sep 3, 2012 at 12:52 AM, Mahmood Naderan wrote:
> Hi
> I can not find a cpuid member in PacketPtr. Is that normal? Or maybe there
> is another name for this?
> Consider a packet arrives in cache_impl.hh and we want to know which
> cpu (or better stated, benchmark) send this packet. H
ot;.
>
> Or is there another way for system with greater number of cores?
>
> Can I apply this method for SPLASH-2 benchmark as well?
>
> Thanks,
> Shervin
>
> --- On *Tue, 7/24/12, gem5-users-requ...@gem5.org <
> gem5-users-requ...@gem5.org>* wrote:
>
>
export GOMP_CPU_AFFINITY="0 1 2 3"
Put this if you are using a 4-core system with 4 threads. It will pin the threads.
On Tue, Jul 24, 2012 at 7:35 AM, shervin hajiamini
wrote:
> Hi all,
>
>
>
> I am running PARSEC suit (canneal) on 4 cores (O3). The workload is NOT
> equally distributed among the cores.
you give me any hint for satisfying this purpose ?
>
> Best Regards
> Wael AMR
>
>
> On Sun, Jul 22, 2012 at 3:04 PM, biswabandan panda wrote:
>
>> possible
>>
>> On Sun, Jul 22, 2012 at 6:29 PM, wael Amr wrote:
>>
>>> Hello,
>>>
&
possible
On Sun, Jul 22, 2012 at 6:29 PM, wael Amr wrote:
> Hello,
>
> I need to use gem5 to:
>
>- run a number of applications,where each application, alone, on a
>processor simulator.
>
> My target is to capture all the memory accesses that each application
> performs. These accesses a
Hi all,
I read through this link
http://www.mail-archive.com/gem5-users@m5sim.org/msg03747.html
I just want to know the present status of this problem.
--
*thanks®ards
*
*BISWABANDAN*
*Any intelligent fool can make things bigger, more complex, and more
vi
>> total number of canneal is less than x264.
>>
>> On 6/25/12, Anusha wrote:
>> > No, it did not create any checkpoint.
>> >
>> > -Anusha
>> >
>> > On Mon, Jun 25, 2012 at 10:46 AM, biswabandan panda
>> > wrote:
>> >
>&
I was able to create checkpoints for all the PARSEC benchmarks.
Could you share the command line you were using?
On Mon, Jun 25, 2012 at 9:19 PM, Anusha wrote:
> No, it did not create any checkpoint.
>
> -Anusha
>
>
> On Mon, Jun 25, 2012 at 10:46 AM, biswabandan panda
>
I guess there is nothing wrong with it. It's fine. Are you able to create
checkpoints?
On Mon, Jun 25, 2012 at 9:11 PM, Anusha wrote:
> When I try to create checkpoint for canneal I get the following message
>
> Just saw element: 10
> netlist created. 10 elements.
> sh: /m5/bin/m5: No su
incMissCount(pkt);
check in cache_impl.hh
On Wed, Jun 6, 2012 at 2:31 PM, Mahmood Naderan wrote:
> Hi
> I can not find where in the code, the demand misses are increased. I
> expect that in cache_impl.hh::timingAccess() where an access is made
> to the cache, this stat increments, But it doesn'
This number corresponds to the number of misses at the MSHR from the
prefetcher side.
MSHR stands for miss status handling registers, which keep track of all
outstanding miss requests, irrespective of whether they are demand or prefetch
requests.
On Mon, Jun 4, 2012 at 3:40 PM, Mahmood Naderan wrote:
> I also can not figure ou
what about invalid lines?
On Sat, May 26, 2012 at 10:38 PM, Mahmood Naderan wrote:
> >What if the dish can actually hold 20 fruits in total, but only 2 apples
> >and 8 oranges are present? What would be the occupancy?
> Ok. In that case,
> occ_percent::total =100% = totalocc_percent::apples +
> o
Hi all,
I have a doubt regarding master-ids. As master-ids are
associated with the requests, I tried to check the master-ids of
demand requests and prefetch requests at the L2 which are coming from
L1. As per the code in the stride prefetcher file, the table is
created for each master. I
for i in xrange(options.num_cpus):
if options.caches:
if options.cpu_type == "arm_detailed":
icache = O3_ARM_v7a_ICache(size = options.l1i_size,
assoc = options.l1i_assoc,
block_size=op
ry(range=AddrRange("512MB")),
> > membus = Bus(), mem_mode = test_mem_mode)
> >
> > for i in xrange(np):
> > system.cpu[i].workload = multiprocesses[i]
> >
> > if options.fastmem:
> > system.cpu[0].physmem_port =
ion available is
> > http://www.mail-archive.com/gem5-dev@gem5.org/msg03370.html
> >
> > seems that no one has successfully configured yet.
> >
> > On 3/28/12, biswabandan panda wrote:
> >> Hi all,
> >> how to configure the prefet
Mahmood Naderan wrote:
> > the only information available is
> > http://www.mail-archive.com/gem5-dev@gem5.org/msg03370.html
> >
> > seems that no one has successfully configured yet.
> >
> > On 3/28/12, biswabandan panda wrote:
> >> Hi all,
>
Hi all,
how to configure the prefetcher in the latest gem5 version?
--
*thanks®ards
*
*BISWABANDAN*
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
On Wed, Mar 7, 2012 at 12:47 AM, Heba Saadeldeen wrote:
> Hi,
>
> I am trying to run Parsec with checkpoints. I followed the report from
> utexas. However when I run it, it exits on switching cpu:
>
> command line: ./build/ALPHA/gem5.opt --stats-file=blackscholes2.txt
> ./configs/example/fs.py
>
That's not true. You can find prefetched blocks at L2 also.
On Fri, Feb 10, 2012 at 1:13 PM, Mahmood Naderan wrote:
> Dear all,
> Assume the prefetcher is enabled for L1 and L2. When L1 issue 'X', it
> checks L2 for that address. Also assume that L2 misses. MM replies to
> L2 with a BLK containin
what is the block size and associativity?
On Wed, Feb 8, 2012 at 11:45 PM, Mahmood Naderan wrote:
> Hi,
> For some debugging purposes, I want to set the cache size to below
> 1kB. However when I set to 512B, it says:
>
> fatal: # of sets must be non-zero and a power of 2
>
> --
> // Naderan *Mahm
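For context, the fatal above comes from the set-count computation: the classic cache derives the number of sets as size / (assoc × block_size), and that count must be a non-zero power of two. A small sketch of the arithmetic (the helper name is mine):

```python
def num_sets(size_bytes, assoc, block_size=64):
    """Number of sets in a set-associative cache of the given geometry."""
    sets, rem = divmod(size_bytes, assoc * block_size)
    # The classic cache requires a non-zero power-of-two set count.
    if rem or sets == 0 or sets & (sets - 1):
        raise ValueError("# of sets must be non-zero and a power of 2")
    return sets

print(num_sets(512, 8))    # 1 set: 512 B works with 8-way, 64 B lines
print(num_sets(32768, 4))  # 128 sets: a typical 32 kB L1
# num_sets(512, 16) would raise: 512 / (16 * 64) gives 0 sets
```

This is why the reply asks for the block size and associativity: a 512 B cache is only valid for combinations where assoc × block_size divides it into a power-of-two set count.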
What's the degree of the prefetcher? I have seen numbers like this
for degree-4.
On Sat, Feb 4, 2012 at 12:30 PM, Mahmood Naderan wrote:
> hi
> While simulating a workload using X86_SE, I noticed that L2 misses are
> large but prefetch issue rate is very very small.
>
> system.l2.prefetche
(1) Use gem5-dev instead of stable
(2) Check m5out/stats.txt - are you getting zeros there?
On Fri, Jan 27, 2012 at 8:04 PM, Sangamesh K wrote:
> Hi All,
>
> I am gem5 beginner and I am trying to run splash benchmarks in ALPHA_SE
> for an in-order CPU. The results obtained were all zeros.
>
> I
Hi all,
Is there a way to dump the Ruby stats, similar to
m5stats? If not, is there any other alternative that I can try?
On 1/25/12, Madhavan manivannan wrote:
> Hi,
>
> The stats start to diverge after the simulator encounters m5_reset op and
> this is because the global variable to sig
Thanks Nilay
On Wed, Jan 25, 2012 at 2:49 AM, Nilay Vaish wrote:
> You will need to mention the following line after both ruby and system
> objects have been created in ruby_fs.py.
>
> system.system_port = system.ruby._sys_port_proxy.**port
>
> --
> Nilay
>
>
> On
It's non-inclusive
On 1/23/12, Mahmood Naderan wrote:
> seems that asked the question in a hard way.
>
> In another word, for the classic memory model,I want to know:
> 1- if the cache is exclusive or inclusive?
> 2- If yes, how can I modify that in config files?
>
> a simple grep finds both "exc
Hi all,
Is prefetching part of the latest gem5 with the Ruby module?
This thread says GEMS is planning to add a stream-based prefetcher to the Ruby
module:
https://www-auth.cs.wisc.edu/lists/gems-users/2011-July/msg00043.shtml
Any update as far as gem5 is concerned?
--
*thanks®ards
*
*
Hi all,
I tried to run 64 cores with the following command :
* command line: build/ALPHA_FS/gem5.fast
configs/example/ruby_fs.py -n 64 --num-dirs=64 --l1i_size=32kB
--l1d_size=32kB --l2_size=8MB --num-l2cache=8 --topology=Mesh --mesh-rows=8*
Output:
warning: overwriting port .tsunami.i
Writeback requests have contextId=-1, as no CPU context is associated with them.
On Fri, Jan 6, 2012 at 5:22 PM, Mahmood Naderan wrote:
> Sorry I didn't understand
> What does ContextId=-1 mean?
>
> On 1/6/12, biswabandan panda wrote:
> > request because of writebacks
> >
&g
} else {
> occupancies[cache->numCpus()]++;
> }
>
> Do we have ContextId=-1 ?? what does that mean then?
>
>
> On 1/6/12, biswabandan panda wrote:
> > yes
> >
> > On Fri, Jan 6, 2012 at 12:03 PM, Mahmood Naderan
> > wrote:
> >
> >
yes
On Fri, Jan 6, 2012 at 12:03 PM, Mahmood Naderan wrote:
> Hi,
> Is ContextId == Core_id while simulating a multiprogram workload?
> 'n' cpus are defined and 'n' workload exist. Each workload is bound to one
> core.
>
> regards
> --
> // Naderan *Mahmood;
>
LRU is not related to totalRefs at all. It is LFU that tracks the least
frequently used blocks, for which you need the total reference count. gem5
correctly models LRU: it tracks the most recently used block by moving it to
the head of the set. For LRU, the total refs don't matter.
One thing that gem5 does not, as far
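The move-to-head behaviour described above can be sketched in a few lines; the class and its layout are my illustration, not gem5's tag code:

```python
class LRUSet:
    """One cache set managed purely by recency; no reference counts."""
    def __init__(self, assoc):
        self.assoc = assoc
        self.blocks = []  # index 0 is the most recently used (the head)

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.remove(tag)   # hit: move the block to the head
        elif len(self.blocks) == self.assoc:
            self.blocks.pop()         # miss in a full set: evict the tail
        self.blocks.insert(0, tag)

s = LRUSet(assoc=2)
for tag in ["A", "B", "A", "C"]:  # C evicts B, the least recently used
    s.access(tag)
print(s.blocks)  # ['C', 'A']
```

Note that the decision never consults how many times a block was referenced in total, only the order of the last touches, which is the point being made about totalRefs.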
blackscholes :
simsmall - Full simulation in a 4 core m/c (2hr 10 mins)
Only ROI (within 1hr 30 mins)
On Tue, Dec 20, 2011 at 6:26 PM, Heiner Litz wrote:
> Hi,
>
> I have the following runtimes for blackscholes (4GHz core2):
>
> ./build/ALPHA_FS/m5.fast -d ./parsec/blackscholes/
increase the memory size
On Thu, Dec 15, 2011 at 12:01 PM, Ankita (Garg) Goel
wrote:
> Hi,
>
> I just setup gem5. When running the sample hello test program, I get the
> following error:
>
> command line: build/ALPHA_SE/gem5.opt configs/example/se.py -c
> tests/test-progs/hello/bin/alpha/linux/he
yup
On Sat, Dec 10, 2011 at 2:30 PM, Mahmood Naderan wrote:
> isTopLevel is enough for 2 level cache
> thanks
>
> On 12/10/11, biswabandan panda wrote:
> > try isTopLevel or put your own flags to distinguish between different
> > levels in the python files
> >
&g
try isTopLevel or put your own flags to distinguish between different
levels in the python files
On Sat, Dec 10, 2011 at 2:07 PM, Mahmood Naderan wrote:
> hi,
> suppose i am debugging cache_impl.hh. As an example, this function
> bool Cache::CpuSidePort::recvTiming(PacketPtr pkt)
>
> How can I fi
a single program that's
> internally multithreaded, total running time is best. In neither case is
> unweighted IPC (or sums or averages of unweighted IPCs) a useful metric.
>
> Geometric vs. harmonic vs. arithmetic means is an orthogonal issue.
>
> Steve
>
>
> On Tue,
> Note that total IPC is a really lousy metric for multithreaded system
> > performance. You really need to weight the thread IPCs by their "native"
> > IPCs in a single-threaded environment to get a meaningful
> speedup/slowdown
> > metric.
> >
> >