Hi Peter,

Thanks for sending this over. I don't see how 100 bytes (10 bytes of data *
10 columns) can represent anything useful. These days it is better to
benchmark with payloads around 1KB.
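For what it's worth, here is the back-of-envelope arithmetic behind that 100-byte figure, taken from the stress.jar command quoted below (a sketch; treating column keys as negligible is my assumption, not something the article states):

```python
# Raw payload per write in the quoted Netflix benchmark.
columns_per_row = 10   # the -c 10 flag in the stress.jar command
bytes_per_column = 10  # "each column has ... 10 bytes of data"
payload = columns_per_row * bytes_per_column
print(payload)  # 100 bytes of raw data per write

# The article says each write is ~400 bytes on disk including all
# overhead, so raw data is only a fraction of what is actually stored.
overhead = 400 - payload
print(overhead)  # ~300 bytes of per-write overhead
```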

Thanks!

On Mon, Oct 31, 2016 at 4:58 PM, Peter Reilly <peter.kitt.rei...@gmail.com>
wrote:

> The original article
> http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
>
>
> On Mon, Oct 31, 2016 at 5:57 PM, Peter Reilly <peter.kitt.rei...@gmail.com
> > wrote:
>
>> From the article:
>> java -jar stress.jar -d "144 node ids" -e ONE -n 27000000 -l 3 -i 1 -t
>> 200 -p 7102 -o INSERT -c 10 -r
>>
>> The client is writing 10 columns per row key, row key randomly chosen
>> from 27 million ids, each column has a key and 10 bytes of data. The total
>> on disk size for each write including all overhead is about 400 bytes.
>>
>> Not too sure about the batching - it may be one of the parameters to
>> stress.jar.
>>
>> Peter
>>
>> On Mon, Oct 31, 2016 at 4:07 PM, Kant Kodali <k...@peernova.com> wrote:
>>
>>> Hi Guys,
>>>
>>>
>>> I keep reading the articles below but the biggest questions for me are
>>> as follows
>>>
>>> 1) what is the "data size" per request? Without the data size it is hard
>>> for me to see anything sensible
>>> 2) is there batching here?
>>>
>>> http://www.datastax.com/1-million-writes
>>>
>>> http://techblog.netflix.com/2014/07/revisiting-1-million-writes-per-second.html
>>>
>>> Thanks!
>>>
>>>
>>>
>>>
>>
>
