mitiskuma commented on PR #18877:
URL: https://github.com/apache/tvm/pull/18877#issuecomment-4008961190
Compute-only batching (no staging pool) vs baseline:
- 0.5B: 1.18x (vs 1.95x with pool)
- 1.5B: 0.92x (regression)
- 3B: 0.99x (no improvement)
- 8B: 0.98x (no improvement)
Without the staging pool, each CPU->GPU copy flushes the pending compute
encoder, so the batching benefit is lost on models with many interleaved
copies. The staging pool is what makes batching effective. That said, I have
not tested a middle ground where CPU->GPU copies use queue.writeBuffer-style
semantics (a memcpy into a shared-mode destination, with no blit encoder),
which would avoid both the flush and the staging-pool complexity; honestly,
though, it looks like a lot of work for not much gain.
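To make the flush argument concrete, here is a toy model (hypothetical, for illustration only; the `flushes` helper and the "direct"/"staging" strategy names are mine, not part of the PR) that counts how often a pending compute encoder must be ended so a blit encoder can run. It assumes the simplified rule from above: with no staging pool, every CPU->GPU copy forces a flush; with a pool (or writeBuffer-style shared-memory copies), none do.

```python
def flushes(ops, strategy):
    """Count compute-encoder flushes for a sequence of 'compute'/'copy' ops.

    strategy:
      'direct'  - each copy needs its own blit encoder, so any open
                  compute encoder must be flushed first (no staging pool)
      'staging' - copies go through a pooled staging buffer (or a
                  writeBuffer-style memcpy into shared memory), so the
                  compute encoder stays open
    """
    count = 0
    encoder_open = False
    for op in ops:
        if op == "compute":
            encoder_open = True
        elif op == "copy" and strategy == "direct" and encoder_open:
            count += 1          # end compute encoder to run the blit
            encoder_open = False
    return count

# A workload with many interleaved copies, like the small models above:
interleaved = ["compute", "copy"] * 8
print(flushes(interleaved, "direct"))   # flushes on every copy: 8
print(flushes(interleaved, "staging"))  # never flushes: 0
```

Under this model, batching only pays off when the copy path doesn't flush, which matches the measured numbers: the pool-less variant regresses exactly on the models with many interleaved copies.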
Let me know what you think about the rest.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]