Hello,

I'm working on methods for simpler and faster cache modeling by batching
functional requests locally and sending them to Ruby only as needed (on L1
misses). I have recently found that, in many cases, sending purely
functional requests yields slower simulation than sending purely timing
requests into Ruby. Are functional requests inherently slow? I was also
hoping to find out whether there is a way to batch functional requests for
a speedup. More specifically, is there a way to attach multiple read or
write requests to a single packet?

For more background, I am hacking the gem5 simulator and feeding traces
into the memory hierarchy. To measure the simulation speed of functional
requests, I tested sending all requests as functional instead of timing.
On a few SPLASH-2 benchmarks (LU, water-spatial, and barnes), simulation
with only functional requests was roughly 2x slower than with timing
requests.

Thanks,

Paco
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users