I have a column family with 15 columns: a timestamp, a timeuuid, a few text
fields, and the rest int fields. If I calculate the size of its column names
and values, and divide 5 KB (the recommended max size for a batch) by that
value, I get 12. Is that correct? Am I missing something?
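
For what it's worth, here is a minimal sketch of that arithmetic (the column
layout, text lengths and per-column overhead below are assumptions for
illustration, not values taken from the driver or the server):

    // Rough client-side estimate; the per-column overhead and the split of
    // text vs. int columns are guesses, tune them for your actual schema.
    import java.nio.charset.StandardCharsets;

    public class BatchSizeEstimate {

        // Assumed overhead per column (name, length prefix, etc.).
        private static final int PER_COLUMN_OVERHEAD = 8;

        // 15 columns: 1 timestamp, 1 timeuuid, the given text values, rest ints.
        static int estimateRowBytes(String... textValues) {
            int size = 8 + 16;                       // timestamp + timeuuid
            for (String t : textValues) {
                size += t.getBytes(StandardCharsets.UTF_8).length;  // text columns
            }
            int intColumns = 15 - 2 - textValues.length;
            size += intColumns * 4;                  // remaining int columns
            size += 15 * PER_COLUMN_OVERHEAD;        // per-column overhead
            return size;
        }

        public static void main(String[] args) {
            int rowBytes = estimateRowBytes("some text", "more text", "xyz");
            int maxBatchBytes = 5 * 1024;            // recommended batch ceiling
            int rowsPerBatch = Math.max(1, maxBatchBytes / rowBytes);
            System.out.println(rowBytes + " bytes per row -> "
                    + rowsPerBatch + " rows per batch");
        }
    }

Whether 12 rows per batch is the right answer depends mostly on how large the
text values actually are, which is why the estimate varies per request.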

Thanks
Ajay
On 02-Mar-2015 12:13 pm, "Ankush Goyal" <ank...@gmail.com> wrote:

> Hi Ajay,
>
> I would suggest looking at the approximate size of the individual elements in
> the batch, and based on that computing the max size (chunk size).
>
> It's not really a straightforward calculation, so I would further suggest
> making that chunk size a runtime parameter that you can tweak and play
> around with until you reach a stable state.
>
> On Sunday, March 1, 2015 at 10:06:55 PM UTC-8, Ajay Garga wrote:
>>
>> Hi,
>>
>> I am looking for a way to compute the optimal batch size on the client
>> side, similar to the server-side check in the bug below (it needs to be
>> generic, as we are exposing REST APIs for Cassandra and the column family
>> and the data are different for each request).
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-6487
>>
>> How do we compute (approximately, using ColumnDefinitions or ColumnMetadata)
>> the size of a row of a column family from the client side using the Cassandra
>> Java driver?
>>
>> Thanks
>> Ajay
>>
