terrymanu commented on issue #37569:
URL: https://github.com/apache/shardingsphere/issues/37569#issuecomment-3697970661

     Thanks for the detailed info; here is how we look at this:
   
     - Threads: Proxy doesn’t create or split client threads. If sysbench runs 200 threads, both the single-database and the sharded setups see the same 200 client connections at the frontend.
     - Frontend vs. backend connections: Client→Proxy connections are frontend; backend connection pools are configured per data source. With two shards and maxPoolSize=200, the Proxy can open up to 200 backend connections per shard (~400 in total), but actual usage follows routing and concurrency (see the config sketch after this list).
     - Performance lift: It comes from sharding-key routing that spreads requests across independent nodes, cutting per-node CPU/IO/lock contention; it does not come from doubling thread or connection counts.
     - Fair comparison: Keep the same client concurrency (e.g., `--threads=200`). Size each shard’s pool so it isn’t the bottleneck (e.g., 100–200 connections per node, depending on skew and spikes). A near-2× gain is realistic only when the sharding key is evenly distributed and queries are single-shard; cross-shard or broadcast queries will lower the gain. A sample sysbench invocation follows the summary below.
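
     A minimal sketch of the Proxy-side YAML these points refer to (the shape follows ShardingSphere 5.x `config-sharding.yaml`; the database names, JDBC URLs, credentials, the `t_order` table, and the `user_id` sharding column are placeholder assumptions, and key names can differ between versions):

```yaml
# config-sharding.yaml (sketch): one backend pool per data source.
# With two shards and maxPoolSize: 200, the Proxy may open up to ~400
# backend connections in total, but only as routing/concurrency demands.
databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:mysql://db-host-0:3306/demo_ds_0?useSSL=false
    username: root                      # placeholder credentials
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 200                    # per-shard upper bound, not a reserved count
    minPoolSize: 1
  ds_1:
    url: jdbc:mysql://db-host-1:3306/demo_ds_1?useSSL=false
    username: root
    password: root
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 200
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order
      databaseStrategy:
        standard:
          shardingColumn: user_id       # the sharding key that routing spreads on
          shardingAlgorithmName: db_mod
  shardingAlgorithms:
    db_mod:
      type: MOD                         # user_id % 2 picks ds_0 or ds_1
      props:
        sharding-count: 2
```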
   
     Summary: More Proxy instances (behind LVS or a similar load balancer) let you fan out more frontend connections and raise end-to-end throughput. More database shards raise the total backend connection count while keeping each per-database pool roughly the same, letting you break a single database’s bottleneck on CPU/IO/locks.
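
     For the benchmark itself, one way to keep the comparison fair is to run the exact same sysbench command against both topologies and change only the endpoint; the host, credentials, table sizing, and duration below are placeholder assumptions (3307 is the Proxy’s default port):

```bash
# Baseline run: point at the single MySQL instance.
# Sharded run: point at the ShardingSphere Proxy instead.
# Everything else, in particular --threads=200, stays identical across runs.
sysbench oltp_read_write \
  --mysql-host=proxy-host \
  --mysql-port=3307 \
  --mysql-user=root \
  --mysql-password=root \
  --mysql-db=sharding_db \
  --tables=10 \
  --table-size=1000000 \
  --threads=200 \
  --time=300 \
  --report-interval=10 \
  run
```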


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
