On Wed, 24 Oct 2012, Malek Musleh wrote:

Hi Nilay,

Yes, I agree with those comments, which is why I posted my question
in the first place: it seemed a bit odd to have the prefetchers
enabled and issued from the Sequencer.

Like I said, I have gotten the M5 prefetchers to work with Ruby
issuing prefetches to the L1 from the Sequencer, but there is no way
to do the same for the L2. The CacheMemory would probably be a better
external structure from which to issue prefetches (either on every
access or only on misses).
I am going to try that route and see how it turns out.
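
Roughly the shape I have in mind, just as an illustration (this is not
gem5 code; the class and method names here are made up): the cache-side
structure notifies a prefetcher on each access, and the generated
prefetches are issued from the cache itself rather than going through
the Sequencer.

#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_set>

typedef uint64_t Addr;

// Hypothetical prefetcher observer: watches accesses/misses and
// queues next-line prefetch candidates. Here it only trains on misses.
class SimplePrefetcher {
  public:
    explicit SimplePrefetcher(Addr blockSize) : blkSize(blockSize) {}

    void notify(Addr addr, bool miss) {
        if (miss)
            queue.push_back((addr + blkSize) & ~(blkSize - 1));
    }

    // Drain one prefetch candidate, if any.
    bool getPrefetch(Addr &addr) {
        if (queue.empty())
            return false;
        addr = queue.front();
        queue.pop_front();
        return true;
    }

  private:
    Addr blkSize;
    std::deque<Addr> queue;
};

// Hypothetical CacheMemory-like wrapper: on every lookup it tells the
// prefetcher what happened, then issues any generated prefetches
// itself instead of going through the Sequencer.
class ToyCache {
  public:
    explicit ToyCache(Addr blockSize) : pf(blockSize), blkSize(blockSize) {}

    void access(Addr addr) {
        Addr line = addr & ~(blkSize - 1);
        bool hit = present.count(line) != 0;
        if (!hit)
            present.insert(line);          // pretend the fill is instant
        pf.notify(line, !hit);

        Addr pfAddr;
        while (pf.getPrefetch(pfAddr)) {
            present.insert(pfAddr);        // "issue" the prefetch
            std::cout << "prefetch 0x" << std::hex << pfAddr << std::dec << "\n";
        }
    }

  private:
    SimplePrefetcher pf;
    Addr blkSize;
    std::unordered_set<Addr> present;
};

int main() {
    ToyCache l2(64);
    l2.access(0x1000);   // miss: trains the prefetcher, which fetches 0x1040
    l2.access(0x1040);   // hit thanks to the prefetch
    return 0;
}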

The issue of not supporting aliased requests, such that demand
requests are stalled because prefetches are pending, would still be
there in the protocol files, right? The cache block would be in some
transient state, and although the Sequencer would allow the demand
request to be enqueued, the actual demand request might end up
recycled/stalled in that transient state. Thus the problem of
backlogging demand requests behind prefetches would still exist; the
only difference is that the Sequencer RequestTable would not
be aware of the cache-initiated prefetch request.

This is not necessary. You can merge / drop requests depending on the situation. But now that I am thinking about it, this should be possible in the Sequencer as well.
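
Roughly what I mean by merging/dropping, purely as a sketch (the names
and the table layout are invented here, not the actual Sequencer code):
keep one outstanding entry per line address; a demand request that
aliases an in-flight prefetch gets merged onto that entry instead of
stalling behind it, while a prefetch that aliases an outstanding
request is simply dropped.

#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

typedef uint64_t Addr;

enum class ReqType { Demand, Prefetch };

// One outstanding miss per cache line; demand requests that alias an
// in-flight prefetch are merged onto the same entry.
struct OutstandingEntry {
    ReqType type;
    std::vector<int> waitingDemands;   // ids of merged demand requests
};

class ToyRequestTable {
  public:
    // Returns true if the request actually needs to go to the cache,
    // false if it was merged or dropped.
    bool insert(Addr line, ReqType type, int id) {
        auto it = table.find(line);
        if (it == table.end()) {
            OutstandingEntry e{type, {}};
            if (type == ReqType::Demand)
                e.waitingDemands.push_back(id);
            table[line] = e;
            return true;                       // new miss: issue it
        }
        if (type == ReqType::Prefetch)
            return false;                      // alias: drop the prefetch
        // Demand aliasing an outstanding request (prefetch or demand):
        // merge instead of stalling behind it.
        it->second.waitingDemands.push_back(id);
        it->second.type = ReqType::Demand;     // upgrade a pending prefetch
        return false;
    }

    // Fill for 'line' came back: complete every merged demand.
    void complete(Addr line) {
        auto it = table.find(line);
        if (it == table.end())
            return;
        for (int id : it->second.waitingDemands)
            std::cout << "demand " << id << " completed by fill of 0x"
                      << std::hex << line << std::dec << "\n";
        table.erase(it);
    }

  private:
    std::unordered_map<Addr, OutstandingEntry> table;
};

int main() {
    ToyRequestTable rt;
    rt.insert(0x1040, ReqType::Prefetch, -1);  // prefetch goes out
    rt.insert(0x1040, ReqType::Demand, 7);     // aliased demand: merged, not stalled
    rt.complete(0x1040);                       // fill wakes the merged demand
    return 0;
}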

--
Nilay
_______________________________________________
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
