The PCEventQueue in src/cpu/pc_event.hh allows you to trigger events on the
execution of particular PCs. The difficulty of course is mapping source
code to PCs. You would also not be able to pass any data to the event.
Using an m5op would incur some small overhead for fetching & executing the op.
Following up on Jason's note to the gem5-dev list, I wanted to share a
brief post I wrote for the SIGARCH blog:
https://www.sigarch.org/remembering-nathan-binkert
I think it's safe to say that gem5 would not exist if it were not for
Nate. About 15 years ago, when he was my PhD student at Michigan,
I agree with Alex: the ISA description system was designed for Alpha, and
it remains the purest example of how it was intended to be used, so I think
there's some value in keeping it around for that.
To me, it should boil down to a cost/benefit consideration. I agree that
the benefits are not that
es is
> "setIntRegOperand," which takes indices into _destRegIdx rather than
> register indices.
>
> On Mon, Aug 1, 2016 at 10:58 AM, Steve Reinhardt wrote:
>
>> You don't need to worry about the size of the bitfield in the instruction
>> encoding, because the tem
the
> minor CPU model problem I described before.
>
> No, most of the ISA is not microcoded. In fact, as I said, these RMW
> instructions are not specified to be microcoded by the ISA, but since they
> each have two memory transactions they didn't appear to work unless I split
>
. With this code, it works with
> minor model, but the final calculated value in the modify-write micro-op
> never gets written at the end of the instruction in the O3 model.
>
>
> On Fri, Jul 29, 2016 at 2:50 PM, Steve Reinhardt wrote:
>
>> I'm still confused about
I'm still confused about the problems you're having. Stores should never
be executed speculatively in O3, even without the non-speculative flag.
Also, assuming the store micro-op reads a register that is written by the
load micro-op, then that true data dependence through the intermediate
register
There are really two issues here, I think:
1. Managing the ordering of the two micro-ops in the pipeline, which seems
to be the issue you're facing.
2. Providing atomicity when you have multiple cores.
I'm surprised you're having problems with #1, because that's the easy part.
I'd assume that you
If not, we are likely to drop it... so speak up now if you care!
Thanks,
Steve
Yes. The constraints I mentioned on locked RMWs (and associated patch)
only apply to classic caches; ruby already supports locked RMWs in all
cases.
Steve
On Tue, Apr 19, 2016 at 3:22 PM Tanmay Gangwani wrote:
> Thanks. And do both ruby and classical memory subsystem support this?
Both cmpxchg8b and cmpxchg16b are implemented, see:
http://grok.gem5.org/xref/gem5/src/arch/x86/isa/insts/general_purpose/semaphores.py#130
This code does rely on the ability to do locked RMWs in the cache, which
works fine out of the box for the AtomicSimple and TimingSimple CPUs, but
requires th
Sure. The example config scripts don't instantiate standard I/O devices
like hard drives in SE mode because they're not needed, but you can still
do it. The recently added GPU model is a good example of something that
has some device-like characteristics but is used in SE mode. There are a
couple
Thanks!! I really appreciate you posting these directions to the mailing
list.
If you don't mind, it would be even more helpful if we integrated the
necessary changes into the code repository (assuming they don't cause any
problems in other situations) and added the necessary instructions (and th
>>
>>
>>
>> See, that's why when the returned resp packet arrives at L1, its
>> address(pkt->getAddr()) can't be equal to the target's packet's address
>> (
>>
>> For a shared block, according to the explanation of wikipedia, they can
>> be "dirty" (Here the 'dirty" is with respect to memory), We probably
>> have several modified copies. But gem5 think they are a
Upgrade requests are used on a write to a shared copy, to upgrade that
copy's state from shared (read-only) to writable. They're generally treated
as invalidations.
A write hit implies that a cache has an exclusive copy, so it knows that
there's no need to send invalidations to lower levels. Ther
>
>> const Addr PageShift = 13;
>>
>
> Is this the correct place to look into ?
>
>
> On Sat, Feb 6, 2016 at 11:54 PM, Steve Reinhardt wrote:
>
>> There was an effort several years ago to reorganize the documentation on
>> the wiki which led to creatin
There was an effort several years ago to reorganize the documentation on
the wiki which led to creating an outline, but then the effort stalled and
parts of the outline were not filled in. I expect that's what happened
here.
If you have specific questions about address translation, feel free to a
Try it with this patch: http://reviews.gem5.org/r/2691
You may also need 3290 & 3291, if the code uses cmpxchg.
Steve
On Thu, Jan 28, 2016 at 6:47 AM Timothy Chong wrote:
> Hello all,
>
> I’m trying to run simple timing FS simulation with parsec x86 with 16
> cores. My simulation gets stuck b
your proposal to have separate
store-address and store-data micro-ops though. I'd have to look more
closely at the code, and unfortunately I don't have time to do that right
now.
Regards,
Steve
On Mon, Nov 2, 2015 at 4:12 AM Virendra Kumar Pathak <
kumarvir.pat...@gmail.com> wrote:
Hi Jyothish,
Can you elaborate on what you're trying to do? How do you end up with a
thread context that has no process pointer? When does that thread context
get used?
Thanks,
Steve
On Fri, Sep 18, 2015 at 3:30 AM Jyothish Soman
wrote:
> Just in case anyone faces this later, this made it w
Which pointer is null? Are you sure it's a memory instruction, and that
all the dependencies are satisfied?
On Mon, Aug 31, 2015 at 11:42 AM Abhishek Rajgadia
wrote:
> Dear All,
> I am trying to compute effective address by calling calcEA() given in
> src/cpu/o3/dyn_inst.hh . But when i call t
ctly using gdb in SE mode, and I want to dump stats and create
> checkpoints at breakpoints. However, this doesn't work and gem5 panics and
> reports: page table fault when accessing virtual address 0... I don't
> know if I can call function in remote debugging session. Thanks.
>
You can call functions from gdb, but most functions are not designed to be
called from gdb. What arguments does the function expect, and what
arguments are you providing?
On Mon, Aug 24, 2015 at 8:13 AM Lingxiao Jia
wrote:
> Hi all,
>
> Can anyone help with this issue? I am still not finding ho
epeated runs.
> So as andreas mentioned this must be introduced by this specific benchmark.
>
> Thanks Andreas and Steve.
>
> On Thu, Aug 13, 2015 at 1:22 PM, Steve Reinhardt wrote:
>
>> Even with x86 you should be seeing deterministic results. If you are
>> regularly
Even with x86 you should be seeing deterministic results. If you are
regularly seeing inconsistencies, you can try running two copies with debug
tracing (I suggest Exec,ExecMacro,Cache as a starting set of flags) and
comparing their output with util/tracediff to see where they diverge.
Steve
On T
nks, Steve. I know the pseudo-instruction stuff, but I do want to
> generate checkpoint at certain PC.
>
> On Thu, Aug 6, 2015 at 3:38 AM, Steve Reinhardt wrote:
>
>> If you're just trying to generate a checkpoint at a particular point in a
>> program, you can also insert
If you're just trying to generate a checkpoint at a particular point in a
program, you can also insert a pseudo-instruction in the program to cause a
checkpoint to be generated without having to figure out the PC value.
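For reference, here is a rough sketch of the config-script side of that flow (hedged:
the directory name is a placeholder, and the exit-cause string follows what the standard
Simulation.py script checks for). The guest program calls m5_checkpoint(0, 0) from
util/m5, which makes m5.simulate() return so the script can write the checkpoint:

    import m5

    # The pseudo-instruction makes the simulate loop exit with cause 'checkpoint'.
    exit_event = m5.simulate()
    while exit_event.getCause() == 'checkpoint':
        m5.checkpoint('m5out/cpt.example')   # placeholder checkpoint directory
        exit_event = m5.simulate()           # resume after writing the checkpoint
    print('Exiting @ tick %d because %s' % (m5.curTick(), exit_event.getCause()))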
Steve
On Wed, Aug 5, 2015 at 11:34 AM Lingxiao Jia
wrote:
> Thanks, Patric
You are right that each instruction execution is a separate call. There is
no binary translation being done.
Steve
On Mon, Jul 13, 2015 at 11:45 PM Abhishek Joshi wrote:
> Hi,
> Can anyone please tell if binary translation is implemented in gem5? I
> have looked through the code and I am aware
Sorry, I had forgotten that getting the multi-queue simulation to run
requires an additional patch that's (1) not committed and (2) currently
only works for x86. See http://reviews.gem5.org/r/2320.
So I guess the question of whether pd-gem5 works with multithreading is
something we can just ponder
or this, which
> I'm not very sure how to do it. Is it possible if you can provide me with
> some examples? I'm really sorry about all these basic questions, I'm still
> very new to gem5.
>
> Thank you very much.
> Best,
> Cao
>
>
>
> On Jun 28, 2015, a
ication among the entire
> system, I was planning to add an USB drive and monitor its communication,
> do you think it's possible to implement it?
> I really appreciate your help.
>
> Best,
> Yuting
>
> On Jun 26, 2015, at 4:15 PM, Steve Reinhardt wrote:
>
> You
You can certainly include devices in your system configuration regardless
of whether you're using SE or FS mode. Without a device driver, though,
it's tricky to actually use the device, unless you've explicitly designed
the device for user-mode access, or if your application has its own device
driv
Hi everyone,
I've uploaded and linked the presentations from last week's gem5 user
workshop on the wiki page:
http://www.gem5.org/User_workshop_2015#Final_Program
Cong Ma has promised to send updated slides, so I have not posted his talk
yet. Please let me know if there are any problems with any
on is it possible to write/make a scheduler as a new
> component by myself? It just does the simplest scheduling work.
>
> Thanks!
>
> M.Y. Lin
>
>
> On Sun, 14 Jun 2015 14:06:45 +, Steve Reinhardt wrote
> > There is no scheduler in SE mode. The number of hardware
There is no scheduler in SE mode. The number of hardware thread contexts
(which is the same as the number of cores, unless you have SMT enabled in
O3) must be >= the number of software threads that get created, so each
software thread gets its own dedicated hardware context and no scheduling
is ne
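A hedged sketch of what that looks like in a config script (this is a fragment assuming
an existing 'system' object; class names such as Process vs. LiveProcess and the
createThreads() call vary across gem5 versions):

    from m5.objects import TimingSimpleCPU, Process

    # A workload that spawns 3 extra pthreads needs at least 4 hardware contexts,
    # so instantiate 4 cores; every core runs the same (shared-memory) process.
    system.cpu = [TimingSimpleCPU(cpu_id=i) for i in range(4)]
    process = Process(cmd=['./pthread_app'])   # placeholder binary; LiveProcess() in older trees
    for cpu in system.cpu:
        cpu.workload = process
        cpu.createThreads()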
gem5 is a simulator, not an emulator. Nevertheless, those times seem very
long, unless you are using the detailed CPU model or something.
Steve
On May 5, 2015 9:58 PM, "Junaid Shuja" wrote:
> Hi,
> I was trying to find out boot time of different gem5 (opt, fast) build
> options. The gem5.opt boo
Hi Tod,
Thanks for being willing to share your experiences and configurations. The
best way to do that is via the gem5 wiki; you can just create an account
and then create a page to hold your information.
Let us know if you have any questions.
Thanks,
Steve
On Sun, May 3, 2015 at 5:16 AM, To
Writebacks do not have virtual addresses, as only the physical address is
available in the cache tag.
Steve
On Fri, Apr 17, 2015 at 10:59 AM, Vinayak Bhargav Srinath
wrote:
> Hi folks,
>
> Currently, using a workaround to prevent this failure in case there is no
> Vaddr in the pkt->req by using
You don't need any permission to edit the wiki... just click "Create
Account" in the upper right and go for it.
I strongly encourage you to put as much information as possible on the wiki
directly so that (1) others can easily update/extend it and (2) we don't
have to worry about any links going s
If you look in src/dev/Ethernet.py, you'll see there's a 'speed' parameter
on the EtherLink object that lets you set the simulated bandwidth.
The details of how you set this will vary depending on your config script,
but if you're using the makeDualRoot() function from configs/example/fs.py,
then
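Roughly, a hedged sketch of overriding those parameters from the top-level script
(the 'root.etherlink' name matches the dual-system example configs; check your own
script if it differs):

    # Tune the simulated Ethernet link created by makeDualRoot().
    root.etherlink.speed = '10Gbps'   # simulated link bandwidth
    root.etherlink.delay = '10us'     # one-way link latency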
Unfortunately the page you found is an orphan created during an unfinished
attempt to reorganize the documentation. The original (complete, though
somewhat out-of-date) documentation is here:
http://www.m5sim.org/ISA_description_system
In particular, the 'format' section is covered here:
http://w
or the information
>
> On Sat, Nov 22, 2014 at 2:20 PM, Mitch Hayenga <
> mitch.hayenga+g...@gmail.com> wrote:
>
>> Have you tried running with the O3CPUAll debug flag? That may shed some
>> more light on whats happening. Steve's suggestion sounds like a
>> po
;t get any writes,
> but they have writebacks, which are only for evicted dirty lines
> or uncached writes? It's ARM FS mode running BBench.
>
> Jack Harvard
>
>
> On Tue, Oct 7, 2014 at 9:22 PM, Steve Reinhardt via gem5-users
> wrote:
> > Yes, in FS mode th
I don't recall the details, but there's some issue with data accesses and
instruction fetches sharing the same port to memory with O3 that leads to a
livelock (or deadlock?) situation... something like you need to do an
ifetch to make forward progress, but every time the port is free you
re-issue a
You'll probably have to modify the linker flags toward the bottom of
src/SConscript.
Steve
On Wed, Nov 12, 2014 at 4:44 PM, ni...@outlook.com via gem5-users <
gem5-users@gem5.org> wrote:
> Thanks, you means add this in sconstruct file or just in the command line?
>
> by the way, i tried to add
then simulation runs
>> fine without errors but I get '0' values in the result. This is
>> understandable. But If I do writeBlob then I always get page fault
>> exception exactly at the same clock tick (just before the benchmark
>> finishing the execution)
>>
, line 160]
> Memory Usage: 11788528 KBytes
> Program aborted at tick 771687885000
>
> Any ideas why 0x2800 range is getting problems by writeBlob?
>
> Thanks.
>
>
>
> On 7 October 2014 15:20, Steve Reinhardt wrote:
>
>> We have a patch internally that implements mo
Actually curTick is not a global variable; see src/sim/core.hh:
inline Tick curTick() { return _curEventQueue->getCurTick(); }
On Thu, Oct 23, 2014 at 2:57 AM, fela via gem5-users
wrote:
> Hi everyone,
>
> I didn't find any response to my question so I reformulate it.
> curTick is a global v
re threads. Is there a way to
> migrate software threads from simulation side?
>
> Thanks.
> Sanem.
>
> Steve Reinhardt
>
>
> The error you're seeing in your second email is precisely because you're
>> no
>> longer using drain(). Basically you're
as never worked before.
> src/mem/cache/cache_impl.hh is the generic code used by all the caches.
> where can I go to only print trace for L2 cache and ignore icache and
> dcache?
>
> On Sat, Oct 11, 2014 at 12:43 PM, Steve Reinhardt
> wrote:
>
>> It's a little convolute
vents must happen at least one simulation quantum into the future,
* otherwise they risk being scheduled in the past by
* handleAsyncInsertions().
On Tue, Oct 14, 2014 at 2:18 AM, fela via gem5-users
wrote:
>
>
> Steve Reinhardt via gem5-users gem5.org> writes:
>
> >
> >
Have you looked at the comments in src/sim/eventq.hh?
Are you interested in parallel simulation or the default single-threaded
case?
Steve
On Mon, Oct 13, 2014 at 3:29 AM, fela via gem5-users
wrote:
> Hi everyone!
>
> I'm trying to understand the simulation core of gem5. Due to the lack of
> d
i only have one ignore objects.
> so can u do me a favor and use the above command in your new version (with
> trace* replaced by debug* of course) and see if it's a version issue? if
> not where can I go to fix this?
>
> On Sat, Oct 11, 2014 at 9:37 AM, Steve Reinhar
The error you're seeing in your second email is precisely because you're no
longer using drain(). Basically you're in trouble if you switch CPUs while
there's a cache miss outstanding, because then the cache miss response will
come back to the wrong (old) CPU. The point of drain() is to put the
s
Sometimes you've got to use the source... from src/python/m5/main.py:
option("--debug-ignore", metavar="EXPR", action='append', split=':',
help="Ignore EXPR sim objects")
Apparently colon is supposed to be the delimiter. The 'split' option is a
Nate extension (see src/python/m5/optio
For swaptions you can try increasing the available simulated memory with
the --mem-size option.
The others it's not so clear. SE mode doesn't support delayed memory
allocation, so if canneal is really trying to mmap 0x7fff7000 bytes (almost
32 GB) of address space, you're pretty much out of luck.
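For reference, a hedged sketch of what --mem-size amounts to if you set it directly in
a custom config (a fragment assuming an existing 'system' object and the classic memory
system):

    from m5.params import AddrRange
    from m5.objects import SimpleMemory

    # Equivalent to passing --mem-size=8GB to configs/example/se.py.
    system.mem_ranges = [AddrRange('8GB')]
    system.physmem = SimpleMemory(range=system.mem_ranges[0])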
5. Is it bulky at the moment ?! In other words, OS is
> the only page table manager in FS mode.?! I've seen you add PageTableEntry
> to the new released code! But I could not guess what is the reason behind
> that.
>
> On Tue, Oct 7, 2014 at 6:03 PM, Steve Reinhardt wrote:
>
Are you talking about SE or FS mode? In SE mode, typically the
ISA-independent PageTable class is used to hold the page tables, and no
walker is needed. In FS mode, the page tables are constructed in the
simulated physical memory by the OS running on the simulated platform; we
use the page-table
We have a patch internally that implements more of mmap(), but
unfortunately it's not quite ready to post.
If you just want to do a read mapping (you don't care if writes to the
mmap'd region get written back to disk), and you don't mind just reading
the whole mmap region in up front (which you ne
Even FS simulation should be deterministic. Although slight changes in
inputs can have a significant effect if they cause changes in the order of
locks etc., with *identical* inputs the simulation should produce
*identical* results.
Steve
On Wed, Sep 10, 2014 at 6:42 PM, biswabandan panda via ge
I'll mention that gem5 does have the foundation for parallelizing a single
simulation across multiple cores; see for example
http://repo.gem5.org/gem5/rev/2cce74fe359e. However, if you want to model
a non-trivial configuration (i.e., one where there is communication between
threads), then you have
ory) works with both ARM
> and ALPHA; would it take significant effort to make it work for x86 as well?
>
> Thanks,
>
> Ivan
>
>
> On Sun, Jun 29, 2014 at 1:47 AM, Steve Reinhardt wrote:
>
>> x86 multi-core with O3 and the classic memory system doesn't work, as the
>
> [] vma_merge+0x1c4/0x2a0
> [] schedule+0x134/0x35a
> [] do_brk+0x1aa/0x380
> [] error_exit+0x0/0x84
>
>
> Code: 48 89 07 48 89 47 08 48 89 47 10 48 89 47 18 48 89 47 20 48
> RIP [] clear_page+0x12/0x40
> RSP
> CR2: 49485c48fc01b000
> note: spec.astar_base[849] exited w
Sorry, we primarily use SE mode, so we don't have this problem. Is this
for a single-core system? Is the error message you see from the kernel or
from gem5?
Steve
On Sat, Jun 28, 2014 at 6:51 PM, Ivan Stalev via gem5-users <
gem5-users@gem5.org> wrote:
> Is anyone successfully running SPEC200
Clone the repository and use 'hg update' with the -r or -d option to get an
older revision.
Steve
On Sun, Jun 22, 2014 at 11:10 PM, Nihar Rathod via gem5-users <
gem5-users@gem5.org> wrote:
> Hi all,
>
> Where can I find older versions of gem5?
> I want gem5 version of year 2012.
>
> Thanks in
If it used to work, and has stopped working, then 'hg bisect' is very
useful to identify exactly where it broke.
Steve
On Sat, Jun 21, 2014 at 1:30 AM, Choi, Wonje via gem5-users <
gem5-users@gem5.org> wrote:
> Hi Castillo,
>
> After the simulation was terminated with deadlock message, I have
We just ran into this ourselves very recently. We haven't tracked it down,
but our suspicion is that there's a bug in the default Ruby protocol
(MI_example) that is somehow triggered by the newer version of glibc, or
perhaps by the code generated by the newer version of gcc.
Please try another Ru
Thanks for all the digging into this issue. Are there bugs/changes in gem5
that can be fixed to address this, e.g., changing what's reported by CPUID,
or changing some of the parameters in the system configuration? If so,
please let us know so that we can update the code.
Thanks,
Steve
On Fr
The simulator no longer needs to be compiled in full-system or system-call
emulation mode; the same binary now supports both. Wherever you read about
compiling in full-system mode is out of date.
Steve
On Thu, May 22, 2014 at 10:58 PM, Ravi Verma via gem5-users <
gem5-users@gem5.org> wrote:
>
If core 0's exclusive request reaches the L1-L2 bus before core 1's, then
core 0 should suppress the cache response to core 1 and deliver the block
directly via a cache-to-cache transfer after it receives (and writes to)
its exclusive copy. The L2 would not end up with two MSHR targets, just
the o
s good, i will wait for that patch.
>
> Thank you.
> Adrian
>
> El lun, 05-05-2014 a las 09:22 -0700, Steve Reinhardt via gem5-users
> escribió:
> > We have an internal patch that generates an exclusive prefetch when a
> > store is issued, which greatly relieves the store
Hi all,
>>>
>>> I have no specific knowledge on what are the buffers modeling or what
>>> they should be modeling, but I too have encountered this issue some time
>>> ago. Setting a high wbDepth is what I do to work around it (actually, 3 is
>>> sufficient f
Hi Paul,
I assume you're talking about the 'wbMax' variable? I don't recall it
specifically myself, but after looking at the code a bit, the best I can
come up with is that there's assumed to be a finite number of buffers
somewhere that hold results from the function units before they write back
We have an internal patch that generates an exclusive prefetch when a store
is issued, which greatly relieves the store bottleneck. We were in the
process of getting it cleaned up to post but things got bogged down
somewhere. I'm going to go see what happened to it and if we can revive it.
Steve
There's no fundamental reason it shouldn't work, though it likely hasn't
been tested. It's just a matter of adding a NIC to the PCI bus in an x86
system (if there isn't already one there), using the existing dual-node
framework to instantiate two of those systems, and hooking up the two NICs
with
If you're in syscall emulation mode, each workload is treated completely
independently. So for your example of 2X and 5Y benchmarks, you're really
running 7 independent benchmarks from gem5's point of view. Unfortunately,
this leads to some inaccuracy, since things like code that would be shared
a
You can use the '-d' option to put all of the output files in a different
directory for each run.
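A hedged sketch of a small driver script that does this (the binary, config, and
benchmark paths are placeholders):

    import subprocess

    # Give each run its own output directory via -d/--outdir so stats.txt,
    # config.ini, etc. don't overwrite each other.
    for bench in ['astar', 'bzip2', 'mcf']:
        subprocess.check_call(['./build/X86/gem5.opt',
                               '-d', 'results/%s' % bench,
                               'configs/example/se.py',
                               '-c', 'binaries/%s' % bench])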
Steve
On Fri, Feb 7, 2014 at 9:13 PM, Tod wrote:
> It would be pretty easier and nicer, if you write a shell script that
> do many simulations for you in a loop, then you can copy the generated
std::memcpy(blk->data, pkt->getPtr(), blkSize) copies an entire
cache block, which is what you want if you are receiving a cache miss
response (as in handleFill) or processing a writeback (as in access).
pkt->writeDataToBlock(blk->data, blkSize) handles writes that are smaller
than a cache block,
Yes, in SE multicore mode you can either run multithreaded workloads (where
all threads share the same memory) or multiprogrammed workloads (where each
thread has its own memory). Right now there's no facility for sharing part
of the workload but not all of it (e.g., just the binary). That could
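For the multiprogrammed case, a hedged sketch of the config fragment (assuming an
existing 'system' object; Process was called LiveProcess in older trees):

    from m5.objects import TimingSimpleCPU, Process

    # Each core gets its own process, so nothing is shared between them.
    system.cpu = [TimingSimpleCPU(cpu_id=i) for i in range(2)]
    for i, cpu in enumerate(system.cpu):
        cpu.workload = Process(pid=100 + i, cmd=['./bench%d' % i])   # placeholder binaries
        cpu.createThreads()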
Yes, I can reproduce that... thanks for pointing it out.
Steve
On Wed, Jan 15, 2014 at 9:43 AM, Patrick wrote:
> Hello,
>
> Some of the pages in the documentation section of the wiki appear not to
> be working. For example, I am unable to access the "Introduction" page
> under the "Getting St
The build directory will not be created until the first time you run scons.
Steve
On Tue, Jan 14, 2014 at 3:18 AM, abbas abdolali pour wrote:
> Hello all,
>
> Previously I've installed the GEM5 on centOs and all directories and other
> folders are exists inside the gem5 folder like /build and I
Those options were renamed --debug-file and --debug-start at some point,
but that rename hasn't made it into the stable version yet.
Steve
On Mon, Jan 13, 2014 at 4:24 PM, Aditya Deshpande <
adityamdeshpa...@gmail.com> wrote:
> Hi,
>
> I was using gem5-stable version, the build had a --trace-fi
The current APIC address mapping is tied to where the kernel expects to see
it, i.e., where it is on the standard PC platform. If you want to support
larger memories, you'd have to figure out what the physical address map
looks like for PCs with >3GB.
Steve
On Mon, Jan 13, 2014 at 3:56 AM, Ahma
You should not modify files under the build directory. The source files
you see there are either auto-generated code or just links to the ones
under src.
SCons automatically tracks dependencies, so if you make changes to the
source files under src, then re-running the scons command will take the
ilar way as an
> integer would be done, like this:
>
> example_int = param.Int("Description")
> example_vector = param.VectorParam("Description")
>
> Best regards,
>
> Alex Tomala
>
> --
> *From:* Steve Reinhardt
> *To:*
n and add the list to the ChildStates
> as a parameter. I am wondering how the python list can be converted to a
> C++ vector object, as I do not know how.
>
> Best regards,
>
> Alex
>
> --
> *From:* Steve Reinhardt
> *To:* Alex Tomala
s your problem here.
Steve
On Thu, Jan 2, 2014 at 3:48 PM, Alex Tomala wrote:
> The method seems to show up in both files, which I have attached to this
> email. Looking over the SWIG documentation briefly, I found no problems
> with the files.
>
> - Alex
>
> -
_
> % (self.__class__.__name__, attr)
> AttributeError: object 'ChildStates' has no attribute 'addChild'
>
> Best regards,
>
> Alex Tomala
> --
> *From:* Steve Reinhardt
> *To:* Alex Tomala
> *Cc:* gem5 users m
done.
>
> One day I may add some information to the wiki, but I am a tad busy now.
>
> Best regards,
>
> Alex Tomala
> --
> *From:* Steve Reinhardt
> *To:* Alex Tomala ; gem5 users mailing list <
> gem5-users@gem5.org>
> *Sent:* Thursday, January
What have you tried and what problems are you having? It should just be a
matter of calling 'obj.addChild(x)' on your ChildStates object 'obj'.
It looks like you've figured most of this out already, but a lot of the
current mechanism was added in this changeset (see particularly the
comments in t
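For the vector-parameter half of the question, a hedged sketch of how a Python list is
usually exposed to C++ as a std::vector via a SimObject parameter ('ChildStates' and
'values' follow this thread; the header path is hypothetical):

    from m5.params import VectorParam
    from m5.SimObject import SimObject

    class ChildStates(SimObject):
        type = 'ChildStates'
        cxx_header = 'sim/child_states.hh'     # hypothetical header path
        # Appears in the generated C++ Params struct as std::vector<int>.
        values = VectorParam.Int([], "values handed to C++ as a vector")

In the config script the list is then passed directly, e.g. ChildStates(values=[1, 2, 3]).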
Take a look at the existing devices in src/dev. A lot of the functionality
you need is encapsulated in the base classes in that directory.
See http://www.gem5.org/docs/html/classPioDevice.html for a class hierarchy
chart.
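As a starting point, a hedged sketch of the Python-side declaration for a simple
memory-mapped device built on those base classes (the class name, header path, and
extra parameter are hypothetical):

    from m5.params import Param
    from m5.objects import BasicPioDevice

    class ToyDevice(BasicPioDevice):
        type = 'ToyDevice'
        cxx_header = 'dev/toy_device.hh'   # hypothetical header
        int_line = Param.UInt32(0, "interrupt line used by this device")  # hypothetical

The C++ side would subclass BasicPioDevice and implement read()/write() for the register
window; in the config you'd typically set pio_addr and connect the device's pio port to
the I/O bus.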
Steve
On Fri, Dec 20, 2013 at 12:59 AM, Erfan Azarkhish wrote:
> Dear
The --trace-file option was recently renamed to --debug-file to be
consistent with other arguments. All the --trace-* arguments are now
--debug-*.
Steve
On Wed, Nov 27, 2013 at 3:57 PM, Xing Niu wrote:
> Hi,
>
> When running GEM5 + Dramsim2 + PARSEC, I encounter a problem: m5.opt:
> error: n
> Hi,
>
> I am confused about the way iocache works. I saw the previous explanation :
>
>
> * On Nov 7, 2008, at 7:28 PM, Steve Reinhardt wrote:
> > Yes, the whole reason for having an IO cache is to make device
> > accesses work in coherent space. An IO cache isn
AMD Research is seeking student interns interested in extending and
enhancing gem5 for projects focusing on the detailed design of
high-performance network interfaces. We are looking for candidates who
have hands-on experience with gem5, as well as one or more of the following:
- I/O device model
Are you using the default configuration parameters? Are you using the
classic cache model or Ruby?
There are a handful of performance model issues with the combination of O3
and x86, particularly if you're using Ruby. We have some patches internally
here at AMD that we are working to clean up and
./astar & taskset -pc 1 ./bzip
> but the program terminates early.
> Can you show me the exact command I should use?
>
> Thanks,
> Yanqi
> --
> *From:* gem5-users-boun...@gem5.org [gem5-users-boun...@gem5.org] on
> behalf of Steve Reinhardt [ste.
If you're in FS mode, then thread scheduling is controlled by Linux. You
can run as many programs as you want, just like on a real Linux system, and
if you have more runnable threads than cores, they will be time-sliced by
the kernel using its internal thread scheduling algorithm.
Your ability to
I'm not sure exactly what you're asking, but I do almost all my gem5
debugging using xemacs gdb-mode. It works fine for the C++ parts. You can
use pdb on the python parts, but sometimes I find it's easier just to add
print statements when debugging python.
Steve
On Fri, Aug 9, 2013 at 4:58 AM,
a) You missed a dot... it's referring to the "port" on the object
"system.physmem"
b) "m5" is a python package, not an object
On Sat, Jul 13, 2013 at 8:04 AM, Zheng Wu wrote:
> Hi All,
>
> I am reading the configuration scripts in python and I am having a
> difficult time finding all the attr