If you look in src/dev/Ethernet.py, you'll see there's a 'speed' parameter
on the EtherLink object that lets you set the simulated bandwidth.
The details of how you set this will vary depending on your config script,
but if you're using the makeDualRoot() function from configs/example/fs.py,
then
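The message is cut off at this point, but the usual pattern looks roughly like this. A sketch only: the `test_sys`/`drive_sys` names and the `etherlink` attribute follow the conventions used by makeDualRoot() in the fs.py configs of this era, and the `delay` parameter is an assumption — check your own config script for the exact names.

```python
# Hypothetical dual-system config fragment (not a complete script).
# makeDualRoot() attaches an EtherLink between the two systems; its
# 'speed' parameter sets the simulated bandwidth.
root = makeDualRoot(True, test_sys, drive_sys, None)
root.etherlink.speed = '10Gbps'   # simulated link bandwidth
root.etherlink.delay = '50us'     # simulated link latency (assumed param name)
```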
Unfortunately the page you found is an orphan created during an unfinished
attempt to reorganize the documentation. The original (complete, though
somewhat out-of-date) documentation is here:
http://www.m5sim.org/ISA_description_system
In particular, the 'format' section is covered here:
http://w
or the information
>
> On Sat, Nov 22, 2014 at 2:20 PM, Mitch Hayenga <
> mitch.hayenga+g...@gmail.com> wrote:
>
>> Have you tried running with the O3CPUAll debug flag? That may shed some
>> more light on what's happening. Steve's suggestion sounds like a
>> po
;t get any writes,
> but they have writebacks, which are only for evicted dirty lines
> or uncached writes? It's ARM FS mode running BBench.
>
> Jack Harvard
>
>
> On Tue, Oct 7, 2014 at 9:22 PM, Steve Reinhardt via gem5-users
> wrote:
> > Yes, in FS mode the OS is the only thing that manages the page tables.
I don't recall the details, but there's some issue with data accesses and
instruction fetches sharing the same port to memory with O3 that leads to a
livelock (or deadlock?) situation... something like you need to do an
ifetch to make forward progress, but every time the port is free you
re-issue a
You'll probably have to modify the linker flags toward the bottom of
src/SConscript.
Steve
On Wed, Nov 12, 2014 at 4:44 PM, ni...@outlook.com via gem5-users <
gem5-users@gem5.org> wrote:
> Thanks, do you mean add this in the SConstruct file or just on the command line?
>
> by the way, i tried to add
It should always fault in the same place; the simulator is deterministic.
Faulting in advancePC() indicates that it's an instruction fetch that's
faulting, which is particularly unusual.
You should look at an execution trace to see why execution is heading off
into the weeds, if that's what's hap
I don't know... that's basically a page fault. Do you know the address
range that your file is mapped to? It may or may not be directly related
to mmap.
Steve
On Fri, Oct 31, 2014 at 6:33 AM, Ahmad Hassan
wrote:
> Hi Steve,
>
> I am running x86 SE mode. The writeBlob() works fine for very sma
Actually curTick is not a global variable; see src/sim/core.hh:
inline Tick curTick() { return _curEventQueue->getCurTick(); }
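To illustrate why curTick() is an accessor rather than a plain global, here is a toy Python analogy (not gem5 code): each event queue carries its own tick, and curTick() reads whichever queue is currently active, which is what makes per-queue time possible in parallel simulation.

```python
class EventQueue:
    """Toy stand-in for gem5's EventQueue: each queue tracks its own tick."""
    def __init__(self, tick=0):
        self._cur_tick = tick

    def get_cur_tick(self):
        return self._cur_tick

# The simulator swaps the "current" queue when it switches event queues
# (e.g. across threads); curTick() always reflects the active queue.
_cur_event_queue = EventQueue(tick=100)

def curTick():
    return _cur_event_queue.get_cur_tick()

print(curTick())
```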
On Thu, Oct 23, 2014 at 2:57 AM, fela via gem5-users
wrote:
> Hi everyone,
>
> I didn't find any response to my question so I reformulate it.
> curTick is a global variable
It's not obvious to me how to easily emulate a software thread migration
purely inside the simulator. Since the application is triggering the
thread migration anyway, why not do it all in software?
Steve
On Tue, Oct 14, 2014 at 7:11 AM, Sanem Arslan
wrote:
> Hi Steve,
>
> First of all thank yo
I never use the ignore option myself, so it may well be that there's
another bug in there somewhere.
Personally I would just dump the full trace and use grep to filter out the
L2 cache lines.
Steve
On Mon, Oct 13, 2014 at 11:24 PM, Marcus Tshibangu
wrote:
> Thanks Steve, your guess makes more
* Events must happen at least one simulation quantum into the future,
* otherwise they risk being scheduled in the past by
* handleAsyncInsertions().
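That comment can be illustrated with a toy Python sketch (not gem5 code, and the quantum value is made up): a queue accepting insertions from other threads must reject events that are not at least one quantum ahead, since by the time handleAsyncInsertions() runs, the queue's own tick may have advanced past them.

```python
SIM_QUANTUM = 500  # ticks; illustrative value only

class AsyncEventQueue:
    """Toy sketch: cross-thread insertions must land at least one
    simulation quantum in the future, or they risk being scheduled
    in the past by the time they are actually inserted."""
    def __init__(self):
        self.cur_tick = 0
        self.pending = []

    def async_schedule(self, when):
        if when < self.cur_tick + SIM_QUANTUM:
            raise ValueError("event scheduled less than one quantum ahead")
        self.pending.append(when)

q = AsyncEventQueue()
q.async_schedule(600)      # fine: at least one quantum into the future
try:
    q.async_schedule(100)  # too soon: could end up in the past
except ValueError as e:
    print("rejected:", e)
```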
On Tue, Oct 14, 2014 at 2:18 AM, fela via gem5-users
wrote:
>
>
> Steve Reinhardt via gem5-users <gem5-users@gem5.org> writes:
>
> >
> >
Have you looked at the comments in src/sim/eventq.hh?
Are you interested in parallel simulation or the default single-threaded
case?
Steve
On Mon, Oct 13, 2014 at 3:29 AM, fela via gem5-users
wrote:
> Hi everyone!
>
> I'm trying to understand the simulation core of gem5. Due to the lack of
> d
It's a little convoluted, but I think I found the problem. Apparently
having multiple ignore strings hasn't worked in quite some time, if ever.
In src/python/m5/main.py, the ignore strings are passed into C++ one at a
time:
    for ignore in options.debug_ignore:
        check_tracing()
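I don't know exactly what the eventual fix looked like, but the failure mode can be sketched in plain Python: if each ignore string handed across replaces the previous filter instead of being accumulated, only the last --debug-ignore expression ever takes effect. The class and method names below are purely illustrative.

```python
# Toy sketch of the suspected bug (not actual gem5 code).
class TraceFilter:
    def __init__(self):
        self.ignore_exprs = []

    def set_ignore(self, expr):
        # Buggy behavior: each call replaces the previous expression.
        self.ignore_exprs = [expr]

    def add_ignore(self, expr):
        # Fixed behavior: each call accumulates.
        self.ignore_exprs.append(expr)

buggy = TraceFilter()
for expr in ["system.l2", "system.membus"]:
    buggy.set_ignore(expr)
print(buggy.ignore_exprs)    # only the last expression survives

fixed = TraceFilter()
for expr in ["system.l2", "system.membus"]:
    fixed.add_ignore(expr)
print(fixed.ignore_exprs)    # both expressions survive
```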
The error you're seeing in your second email is precisely because you're no
longer using drain(). Basically you're in trouble if you switch CPUs while
there's a cache miss outstanding, because then the cache miss response will
come back to the wrong (old) CPU. The point of drain() is to put the
s
Sometimes you've got to use the source... from src/python/m5/main.py:
option("--debug-ignore", metavar="EXPR", action='append', split=':',
help="Ignore EXPR sim objects")
Apparently colon is supposed to be the delimiter. The 'split' option is a
Nate extension (see src/python/m5/optio
For swaptions you can try increasing the available simulated memory with
the --mem-size option.
The others it's not so clear. SE mode doesn't support delayed memory
allocation, so if canneal is really trying to mmap 0x7fff7000 bytes (almost
32 GB) of address space, you're pretty much out of luck.
Yes, in FS mode the OS is the only thing that manages the page tables.
Just like a real system.
On Tue, Oct 7, 2014 at 9:28 AM, mohammad reza Soltaniyeh <
m.soltani...@gmail.com> wrote:
> I am talking about FS mode. I couldn't get the point about page-table
> walker used in gem5. Is it bulky at t
Are you talking about SE or FS mode? In SE mode, typically the
ISA-independent PageTable class is used to hold the page tables, and no
walker is needed. In FS mode, the page tables are constructed in the
simulated physical memory by the OS running on the simulated platform; we
use the page-table
We have a patch internally that implements more of mmap(), but
unfortunately it's not quite ready to post.
If you just want to do a read mapping (you don't care if writes to the
mmap'd region get written back to disk), and you don't mind just reading
the whole mmap region in up front (which you ne
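The read-only workaround described above can be sketched outside the simulator: read the whole would-be-mapped region into a buffer in one shot, which then stands in for the bytes you would copy into simulated memory (e.g. via writeBlob) at the chosen virtual address. The helper name here is made up.

```python
import os
import tempfile

def load_region_up_front(path, length, offset=0):
    """Read the entire would-be-mmap'd region into a buffer up front.
    This only emulates a read mapping; writes are never flushed back."""
    with open(path, 'rb') as f:
        f.seek(offset)
        return f.read(length)

# Usage sketch with a throwaway file standing in for the mmap'd file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello, mmap region")
    name = tmp.name
buf = load_region_up_front(name, 5)
print(buf)
os.unlink(name)
```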
Even FS simulation should be deterministic. Although slight changes in
inputs can have a significant effect if they cause changes in the order of
locks etc., with *identical* inputs the simulation should produce
*identical* results.
Steve
On Wed, Sep 10, 2014 at 6:42 PM, biswabandan panda via ge
I'll mention that gem5 does have the foundation for parallelizing a single
simulation across multiple cores; see for example
http://repo.gem5.org/gem5/rev/2cce74fe359e. However, if you want to model
a non-trivial configuration (i.e., one where there is communication between
threads), then you have
It would be great to make this work. The key issue is that x86
synchronization is different from ARM & Alpha. The latter rely on
load-link/store-conditional, but x86 relies on the ability to do locked RMW
transactions that are guaranteed atomic. This is signaled to the cache
using the LOCKED flag
x86 multi-core with O3 and the classic memory system doesn't work, as the
classic caches don't have support for x86 locked accesses. In contrast,
x86 multi-core works with O3 and Ruby, since Ruby does support
locked accesses; and it also works with the AtomicSimpleCPU model and
classic memory, si
Sorry, we primarily use SE mode, so we don't have this problem. Is this
for a single-core system? Is the error message you see from the kernel or
from gem5?
Steve
On Sat, Jun 28, 2014 at 6:51 PM, Ivan Stalev via gem5-users <
gem5-users@gem5.org> wrote:
> Is anyone successfully running SPEC200
Clone the repository and use 'hg update' with the -r or -d option to get an
older revision.
Steve
On Sun, Jun 22, 2014 at 11:10 PM, Nihar Rathod via gem5-users <
gem5-users@gem5.org> wrote:
> Hi all,
>
> Where can I find older versions of gem5?
> I want gem5 version of year 2012.
>
> Thanks in
If it used to work, and has stopped working, then 'hg bisect' is very
useful to identify exactly where it broke.
Steve
On Sat, Jun 21, 2014 at 1:30 AM, Choi, Wonje via gem5-users <
gem5-users@gem5.org> wrote:
> Hi Castillo,
>
> After the simulation was terminated with deadlock message, I have
We just ran into this ourselves very recently. We haven't tracked it down,
but our suspicion is that there's a bug in the default Ruby protocol
(MI_example) that is somehow triggered by the newer version of glibc, or
perhaps by the code generated by the newer version of gcc.
Please try another Ru
Thanks for all the digging into this issue. Are there bugs/changes in gem5
that can be fixed to address this, e.g., changing what's reported by CPUID,
or changing some of the parameters in the system configuration? If so,
please let us know so that we can update the code.
Thanks,
Steve
On Fr
The simulator no longer needs to be compiled in full-system or system-call
emulation mode; the same binary now supports both. Wherever you read about
compiling in full-system mode is out of date.
Steve
On Thu, May 22, 2014 at 10:58 PM, Ravi Verma via gem5-users <
gem5-users@gem5.org> wrote:
>
If core 0's exclusive request reaches the L1-L2 bus before core 1's, then
core 0 should suppress the cache response to core 1 and deliver the block
directly via a cache-to-cache transfer after it receives (and writes to)
its exclusive copy. The L2 would not end up with two MSHR targets, just
the o
s good, I will wait for that patch.
>
> Thank you.
> Adrian
>
> El lun, 05-05-2014 a las 09:22 -0700, Steve Reinhardt via gem5-users
> escribió:
> > We have an internal patch that generates an exclusive prefetch when a
> > store is issued, which greatly relieves the store
>>> Hi all,
>>>
>>> I have no specific knowledge on what are the buffers modeling or what
>>> they should be modeling, but I too have encountered this issue some time
>>> ago. Setting a high wbDepth is what I do to work around it (actually, 3 is
>>> sufficient f
Hi Paul,
I assume you're talking about the 'wbMax' variable? I don't recall it
specifically myself, but after looking at the code a bit, the best I can
come up with is that there's assumed to be a finite number of buffers
somewhere that hold results from the function units before they write back
We have an internal patch that generates an exclusive prefetch when a store
is issued, which greatly relieves the store bottleneck. We were in the
process of getting it cleaned up to post but things got bogged down
somewhere. I'm going to go see what happened to it and if we can revive it.
Steve