Hello,

Our service (Truman) uses the Go client APIs to read from (and write to)
Bigtable. The nature of our traffic is such that we periodically issue a
large number of single-row reads from one job to a replicated (2-cell)
Bigtable in response to user requests.
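
For concreteness, here is roughly the shape of the read path. This is a
minimal sketch using the public cloud.google.com/go/bigtable client as a
stand-in for our actual code (which goes through the internal lamprey
client); the project, instance, table, and key names are made up:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()
        // Stand-in names; the real project/instance/table differ.
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance")
        if err != nil {
            log.Fatalf("NewClient: %v", err)
        }
        defer client.Close()

        tbl := client.Open("my-table")

        // Each user request triggers one single-row read like this.
        row, err := tbl.ReadRow(ctx, "user#12345")
        if err != nil {
            log.Fatalf("ReadRow: %v", err)
        }
        log.Printf("got %d column families", len(row))
    }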

In my tests, when a single task in the job is doing all the Bigtable
reads, I'm seeing very high read latencies once a certain QPS is reached.
However, increasing the number of tasks, effectively spreading the same
total read QPS across more of them, seems to alleviate the issue.
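
To make the test setup concrete, each task runs a read loop along these
lines. This is a sketch under the same public-client assumption as above;
qps is the per-task target rate and randomRowKey is a placeholder for
however the harness picks keys:

    import (
        "context"
        "fmt"
        "log"
        "math/rand"
        "time"

        "cloud.google.com/go/bigtable"
        "golang.org/x/time/rate"
    )

    // readLoop approximates one task in the test: a token bucket holds
    // the task at qps reads per second, and each permit fires one
    // single-row read whose latency we log.
    func readLoop(ctx context.Context, tbl *bigtable.Table, qps int) {
        lim := rate.NewLimiter(rate.Limit(qps), 1)
        for ctx.Err() == nil {
            if err := lim.Wait(ctx); err != nil {
                return
            }
            go func() {
                start := time.Now()
                if _, err := tbl.ReadRow(ctx, randomRowKey()); err != nil {
                    log.Printf("ReadRow: %v", err)
                    return
                }
                log.Printf("read latency: %v", time.Since(start))
            }()
        }
    }

    // randomRowKey is a placeholder for the test's key distribution.
    func randomRowKey() string {
        return fmt.Sprintf("user#%06d", rand.Intn(1000000))
    }

With this loop, one task at the full QPS shows the latency blowup, while
N tasks each running at 1/N of the QPS do not.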

Is there a QPS limit or throttling going on in the client
(lamprey/envelope) on a per-task basis? Perhaps some setting I've missed?
Dapper traces show a lot of time spent in LampreyBigtable.ForRead.
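
One per-process knob I do know of on the public client is the gRPC
connection pool size, which keeps a single process from funneling every
read through one connection; I haven't found an equivalent on the lamprey
side, which is partly why I'm asking. A sketch of what I mean (the pool
size of 8 is an arbitrary example):

    import (
        "context"

        "cloud.google.com/go/bigtable"
        "google.golang.org/api/option"
    )

    // newPooledClient widens the gRPC connection pool so one process
    // isn't limited to a single connection's throughput. Whether the
    // lamprey client has a comparable setting is what I'd like to know.
    func newPooledClient(ctx context.Context) (*bigtable.Client, error) {
        return bigtable.NewClient(ctx, "my-project", "my-instance",
            option.WithGRPCConnectionPool(8)) // example value only
    }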

Reference: cl/127674083. Most of the relevant code is in pods.go,
store.go, and template.gcl.

Any thoughts appreciated.

Thanks
-Rohit
