Hi Nilay,

Yes, I agree with those comments, which is why I posted my question
in the first place: it seemed a bit odd to have prefetches
enabled/issued from the Sequencer.

Like I said, I have gotten the M5 prefetchers to work with Ruby by
issuing prefetches to the L1 from the Sequencer, but there is no way to
do the same for the L2. CacheMemory would probably be a more suitable
external structure from which prefetches could be issued (either on
every access or only on misses).
I am going to try that route and see how it turns out.
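To make the "only on misses" option concrete, here is a minimal sketch of that idea. The class and method names (ToyCache, StridePrefetcher, notify) are hypothetical stand-ins, not gem5's actual API: the cache calls into the prefetcher only on a miss, so the prefetcher trains on the miss trace rather than the full access trace.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_set>

// Hypothetical names (not gem5 API): a prefetcher trained only on the
// miss trace, driven by the cache structure rather than the Sequencer.
class StridePrefetcher {
  public:
    // Train on a miss and return the next line address to prefetch.
    uint64_t notify(uint64_t lineAddr) {
        int64_t stride = static_cast<int64_t>(lineAddr) - lastMiss;
        lastMiss = static_cast<int64_t>(lineAddr);
        return lineAddr + (stride > 0 ? stride : 1);
    }
  private:
    int64_t lastMiss = 0;
};

class ToyCache {
  public:
    // Returns the line the prefetcher wants fetched, or 0 on a hit
    // (on a hit the prefetcher is not notified at all).
    uint64_t access(uint64_t lineAddr) {
        if (lines.count(lineAddr))
            return 0;                    // hit: prefetcher stays quiet
        lines.insert(lineAddr);          // fill on miss
        return pf.notify(lineAddr);      // train only on the miss trace
    }
  private:
    std::unordered_set<uint64_t> lines;
    StridePrefetcher pf;
};
```

Issuing the notify() from the cache structure in this way would work the same at L1 and L2, which is the uniformity the Sequencer-based approach lacks.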

As for the issue of aliased requests not being supported, such that
demand requests are stalled while prefetches are pending: wouldn't that
issue still exist in the protocol files? The cache block would be in
some transient state, and although the sequencer would allow the demand
request to be enqueued, the actual demand request might end up being
recycled/stalled in that transient state. So the problem of backlogging
demand requests behind prefetches would still exist; the only
difference is that the Sequencer's RequestTable would not be aware of
the cache-initiated prefetch request.
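The aliasing restriction itself is easy to state in code. Below is a hedged sketch (RequestTable, ReqKind, and the method names are illustrative, not the actual Sequencer implementation) of a table keyed by line address that rejects any second request to a line with one already outstanding, which is why a demand access can end up stalled behind an in-flight prefetch regardless of which component issued it.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Illustrative sketch, not gem5's Sequencer: one outstanding request
// per line address; a second request to the same line is refused and
// must be retried, whether it is a demand access or a prefetch.
enum class ReqKind { Demand, Prefetch };

class RequestTable {
  public:
    // Returns true if accepted, false if it aliases an outstanding
    // request and the caller must stall/retry.
    bool insert(uint64_t lineAddr, ReqKind kind) {
        if (outstanding.count(lineAddr))
            return false;                // alias: demand stalls here
        outstanding[lineAddr] = kind;
        return true;
    }
    void complete(uint64_t lineAddr) { outstanding.erase(lineAddr); }
  private:
    std::unordered_map<uint64_t, ReqKind> outstanding;
};
```

The same stall reappears at the protocol level: a block sitting in a transient state while a prefetch completes will recycle an incoming demand request just as this table refuses it, whether or not the Sequencer's table ever saw the prefetch.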

In either case (and looking at your recently posted prefetcher patch),
I don't see yet how one can avoid modifying the protocol files to
support L2-initiated prefetches.

Malek

On Wed, Oct 24, 2012 at 11:19 AM, Nilay Vaish <ni...@cs.wisc.edu> wrote:
> On Tue, 16 Oct 2012, Malek Musleh wrote:
>
>> Hi,
>>
>> I have a question about implementing prefetching with the Ruby Memory
>> Model. Looking at the Sequencer.cc code, there is a comment about how
>> hardware prefetches should be issued in the makeRequest() routine.
>> Now, I can sort of understand why hardware prefetches should be issued
>> from the sequencer (not necessarily where that comment is placed),
>> because that is the way normal CPU requests get issued to Ruby.
>>
>> In view of that, I have ported over the existing gem5 prefetcher
>> models (tagged, stride, ghb) from the Classic Memory to work with
>> Ruby. From initial testing it seems to work fine, but the issue I have
>> is that this only applies for L1 prefetching.
>>
>> Now, does it really make sense to implement prefetching at the L1
>> level differently from the L2? Wouldn't it make more sense for the
>> CacheMemory module to perform the prefetcher->notify() and
>> schedule/issue the prefetches in all cases? In other words, is the
>> comment I am referring to still valid, and what is the main
>> justification for issuing prefetches from that location?
>>
>
> It is just a comment; you can choose to ignore it. Placement of the
> prefetcher in the memory system is a matter of design. Different choices
> have different pros and cons.
>
> If you want to have prefetching capability added to all the protocols
> without having to modify them, then updating the sequencer is the way to go.
> But then, there are issues involved with adding prefetching to the
> sequencer. You might not want to push prefetches to the mandatory queue, as
> demand reads and writes should be preferred over prefetches. Normally
> prefetchers are trained on the miss trace, rather than on the entire trace.
> But if only the sequencer is modified, the prefetcher will train on the
> entire trace and would possibly issue prefetches that are not required.
> The Sequencer, as of now, does not allow multiple requests for the same
> address
> to be outstanding, so a prefetch request might end up blocking a demand
> request.
>
> Changing a coherence protocol to add prefetching requires time and effort,
> and would need to be done separately for each protocol.
>
> --
> Nilay
_______________________________________________
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users