On 12/12/2019 06.25, lampahome wrote:
Jon Haddad <j...@jonhaddad.com> wrote on Thursday, December 12, 2019, at 12:42 AM:
I'm not sure how you're measuring this - could you share your
benchmarking code?
Here are the details:
import time

# `query` is a CQL string and `args` its bind values, defined elsewhere
prep = session.prepare(query)  # prepare() takes only the query; bind values at execute time
start = time.time()
for i in range(40960):
    session.execute(prep, args)  # or session.execute_async(prep, args)
print('time', time.time() - start)
It is just like the code snippet above.
In almost every run, execute_async() costs me more time than a normal execute().
I think you're just exposing Python and perhaps driver weaknesses.
With .execute(), memory usage stays constant and you pay the round-trip
time once per loop iteration.
With .execute_async(), memory usage grows, and if any algorithm in the
driver is not O(1) (say, maintaining the outstanding-request table),
execution time grows as you push more and more requests. The thread(s)
that process responses also have to contend with the request-issuing
thread over locks. You don't pay the round-trip time per request, but
your results suggest these other costs dominate.
If you collect responses in your loop, and also bound the number of
outstanding requests to a reasonable number, you'll see execute_async()
perform better (see the sketch below). You'll see even better performance
if you drop Python for a language more suitable for the data plane.
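
A minimal sketch of that bounded pattern, assuming the same session,
query, and args as in the snippet above; MAX_IN_FLIGHT is an arbitrary
bound you would tune for your cluster and workload:

import time

MAX_IN_FLIGHT = 128  # assumed bound on outstanding requests; tune as needed

prep = session.prepare(query)  # prepare once, outside the timed loop
start = time.time()
futures = []
for i in range(40960):
    futures.append(session.execute_async(prep, args))
    if len(futures) >= MAX_IN_FLIGHT:
        # collect responses so the driver can release its per-request state
        for f in futures:
            f.result()
        futures = []
for f in futures:  # drain whatever is still in flight
    f.result()
print('time', time.time() - start)

The driver also ships a helper for this pattern,
execute_concurrent_with_args in cassandra.concurrent, which manages the
concurrency bound for you.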