Hello All,
I'm trying to implement a simulation memory object that accepts atomic requests on its CPUSide port (a response port) and forwards them to the memory controller.
This simulation object is connected to the cache through its CPUSidePort and to the memory controller through its memSidePort.
It works fi
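For concreteness, the shape of the object I have in mind is roughly the following (a simplified sketch rather than my exact code: the class names are made up, and the SimObject params / getPort() plumbing is omitted):

#include "mem/port.hh"
#include "sim/sim_object.hh"

namespace gem5
{

class PassThroughMem : public SimObject
{
    class CPUSidePort : public ResponsePort
    {
        PassThroughMem &owner;

      public:
        CPUSidePort(const std::string &name, PassThroughMem &o)
            : ResponsePort(name, &o), owner(o) {}

        // Atomic accesses from the cache go straight to the memory side.
        Tick recvAtomic(PacketPtr pkt) override
        { return owner.memSidePort.sendAtomic(pkt); }

        void recvFunctional(PacketPtr pkt) override
        { owner.memSidePort.sendFunctional(pkt); }

        AddrRangeList getAddrRanges() const override
        { return owner.memSidePort.getAddrRanges(); }

        // Timing mode is not handled in this atomic-only sketch.
        bool recvTimingReq(PacketPtr pkt) override { return false; }
        void recvRespRetry() override {}
    };

    class MemSidePort : public RequestPort
    {
      public:
        MemSidePort(const std::string &name, PassThroughMem &o)
            : RequestPort(name, &o) {}

        bool recvTimingResp(PacketPtr pkt) override { return true; }
        void recvReqRetry() override {}
    };

    CPUSidePort cpuSidePort;
    MemSidePort memSidePort;

    // Constructor, Params struct, and getPort() are omitted for brevity.
};

} // namespace gem5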
Hi All
I'm trying to understand how the emulated page table works in gem5. I'm using the O3 CPU model.
1) How can I control the number of cycles needed to redo a TLB translation if it fails?
Is there any schedule for any of the following translation events:
i.e. EmulationPageTable::translate
Hi all
What can cause a Packet's "VALID_ADDR" flag to be cleared, even though it was set before?
I'm sending some packets from the cache to an unblocking memory object that I created. Inside this memory object, I created a queue to store packets that fail to be sent.
Aft
g), it will cause a segmentation fault!
Thank you.
regards,
Abdelrhman
From: Abdlerhman Abotaleb
Sent: Wednesday, July 13, 2022 11:52 AM
To: Balazs Gerofi via gem5-users
Subject: [gem5-users] Packet VALID_ADDR being cleared when try to resend it !
To: The gem5 Users mailing list
Subject: [gem5-users] Re: Packet VALID_ADDR being cleared when try to resend it !
On 7/13/2022 5:14 PM, Abdlerhman Abotaleb wrote:
> After thorough debugging
> It appears that the problem happens at the following call:
> Inside "CoherentXBar::recvTim
On 7/14/2022 2:46 PM, Abdlerhman Abotaleb wrote:
> Thank you, Eliot, for your reply. 🙏
> I solved it, but still need to understand the cause.
> This is the full story:
> I defined an STL queue of pointers to Packets to store the packets for further resending.
> st
From: Eliot Moss
Sent: Thursday, July 14, 2022 8:48 PM
To: The gem5 Users mailing list
Subject: [gem5-users] Re: Packet VALID_ADDR being cleared when try to resend it !
On 7/14/2022 8:07 PM, Abdlerhman Abotaleb wrote:
> Thanks Eliot for the follow up.
> The response is created independently using the m
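The resend pattern being described is roughly the following (a simplified sketch with made-up names, not the code from the thread). The point is that packets which fail sendTimingReq stay queued untouched (not deleted or converted into responses) and are resent from recvReqRetry:

#include <queue>

#include "mem/packet.hh"
#include "mem/port.hh"

namespace gem5
{

class ResendingPort : public RequestPort
{
    // Packets that could not be sent yet. They are stored as-is and must not
    // be deleted or converted into responses while queued, otherwise flags
    // such as VALID_ADDR may no longer be set when the resend happens.
    std::queue<PacketPtr> retryQueue;

  public:
    using RequestPort::RequestPort;

    void
    trySend(PacketPtr pkt)
    {
        // After one rejected sendTimingReq we must wait for recvReqRetry;
        // any further packets are queued instead of being sent immediately.
        if (!retryQueue.empty() || !sendTimingReq(pkt))
            retryQueue.push(pkt);
    }

    void
    recvReqRetry() override
    {
        // The peer is ready again: resend from the front of the queue until
        // it is empty or a packet is rejected again.
        while (!retryQueue.empty() && sendTimingReq(retryQueue.front()))
            retryQueue.pop();
    }

    bool recvTimingResp(PacketPtr pkt) override { return true; }
};

} // namespace gem5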
I'm trying to send many memory requests with a short time separation between them.
The following error happened:
"
gem5.debug: build/RISCV/mem/xbar.cc:199:
bool gem5::BaseXBar::Layer<gem5::ResponsePort,
gem5::RequestPort>::tryTiming(SrcType *)
[SrcType = gem5::ResponsePort, DstType = gem5::RequestPort]:
Asser
I want to speed up gem5 simulation by simulating different parts of the benchmark using different simulation models (i.e. atomic, functional, timing).
I used m5ops to annotate different parts of the code to trigger useful effects like resetting gem5 stats.
Can I use it - or use any different appr
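Concretely, the kind of annotation I mean looks like the sketch below (it assumes the workload is built against gem5's include/gem5/m5ops.h and linked with libm5, and that the config script handles the resulting exit event by switching CPU models; the phase functions are placeholders):

// Sketch of region-of-interest annotation with m5ops.
#include <gem5/m5ops.h>

static void setup_phase() { /* ... fast-forwarded part ... */ }
static void region_of_interest() { /* ... detailed part ... */ }

int main()
{
    setup_phase();          // run under the fast CPU model

    m5_reset_stats(0, 0);   // throw away stats gathered so far
    m5_switch_cpu();        // exit event; the config can switch CPU models here
    region_of_interest();

    m5_dump_stats(0, 0);    // dump stats for the region of interest
    m5_exit(0);             // end the simulation
}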
I want to do fast forwarding,
i.e. run most of the program with AtomicSimpleCPU until it encounters "m5_switch_cpu", then use MinorCPU.
I used the following options when running the gem5 binary:
"--cpu-type MinorCPU --fast-forward=100"
I got a segmentation fault, and when debuggi
SCons produces debug symbols for gem5, except for shared libraries.
i.e. if I integrate any module in the ext folder with gem5, it is compiled
and linked as a shared library without debug symbols being added.
What modification should I make to SConstruct or SConscript
to enable shared lib
How can I use different physical addresses for two programs running on
two processors?
I find that using "malloc" allocates the same addresses.
Also, I'm using the same binary on the two processors.
Thanks.
How can I force the TLB in syscall emulation (SE) mode to always produce
Page Frame Number = Virtual Page Number?
I may want to try two things (a sketch of the first follows below):
1) Don't disable the TLB, but have PFN = VPN.
2) Disable the TLB, so addresses in the workload are physical addresses.
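A very rough sketch of the first idea, assuming it is acceptable to override EmulationPageTable::map (the class name is made up, the Process would have to be changed to instantiate it, and it only works if the workload's virtual footprint fits inside the configured physical memory without collisions):

#include "mem/page_table.hh"

namespace gem5
{

class IdentityPageTable : public EmulationPageTable
{
  public:
    using EmulationPageTable::EmulationPageTable;

    void
    map(Addr vaddr, Addr /* paddr */, int64_t size, uint64_t flags) override
    {
        // Ignore the physical frame chosen by the allocator and map every
        // virtual page to the frame with the same number, so PFN == VPN.
        EmulationPageTable::map(vaddr, vaddr, size, flags);
    }
};

} // namespace gem5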
Thanks, a lot.
I want to identify the source CPU# of a packet.
I found a field called "pkt->requestorId()"
This field can take the following predefined values (source: gem5/src/mem/request.hh):
wbRequestorId = 0, /* writeback requests by the caches */
funcRequestorId = 1, /* functional
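What I'm after is roughly the following (a sketch, not existing gem5 code; it assumes the receiving object has a pointer to the System, which keeps the requestor-id-to-name mapping):

#include <string>

#include "base/logging.hh"
#include "mem/packet.hh"
#include "sim/system.hh"

namespace gem5
{

// Recover which requestor issued a packet. CPU-issued requests carry the ids
// the cores registered with the System, so the returned name contains the CPU
// instance (e.g. "system.cpu1.data"), while the reserved ids quoted above
// (wbRequestorId, funcRequestorId, ...) identify cache writebacks etc.
void
printRequestor(System *system, PacketPtr pkt)
{
    const RequestorID id = pkt->requestorId();
    const std::string name = system->getRequestorName(id);
    inform("packet came from requestor %d (%s)", id, name);
}

} // namespace gem5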
I have a program bin.riscv that is running on 4 processors simultaneously.
Inside bin.riscv:
// some code
m5_checkpoint(0,0);
m5_reset_stats(0,0);
// some code
I'm running GEM5 using the following commands:
First
gem5.opt ./configs/example/se.py -n 4 --caches --max-checkpoints
How does resetting stats work in the multicore case?
I run a multicore experiment with 8 cores running the same binary.
The binary has m5_reset_stats(0,0); before the code of interest, which is a loop that runs a huge number of iterations (for example 100k ~ accesses 781kB of data, in case of double type it
An interesting thing I found when analyzing the gem5 output is that core 7 hits the reset stats after core 0 finishes. This happens in a very special scenario, if scheduling favors one core over another, so now this makes sense.
From: Abdlerhman Abotaleb via gem5-users
Sent: Saturday
How can I share a variable between multiple cores in gem5? (I'm simulating RISC-V cores.)
I can see that each core allocates a different VPN-to-PFN translation.
So even if I explicitly assign a memory address to a variable (i.e. char *arr = 0x20010, then dereference it later) it will be in different
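One direction I'm considering (not sure it is the right one): instead of loading a separate copy of the binary on each core, run a single multi-threaded binary so all cores execute threads of one process and therefore share one page table. A minimal sketch, assuming the SE config provides enough thread contexts (e.g. se.py with -n 4) and that the guest libc's threading works under SE mode:

#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// The variable shared between the cores: all threads address the same
// physical location because they live in the same process.
std::atomic<int> shared_counter{0};

static void worker(int tid)
{
    shared_counter.fetch_add(1);
    std::printf("thread %d sees %d\n", tid, shared_counter.load());
}

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(worker, i);
    for (auto &t : threads)
        t.join();
}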
Regarding the following cache memory latency parameters:
Tag latency
Data latency
Response latency
The default values for the L2 cache are 20 cycles each.
Do those values seem practical?
What is the total latency for an access that hits in L2 Cache?
(L1 Miss + L2 Hit)
Should I