Re: high write latency on a single table

2019-07-22 Thread Ben Slater
Is the size of the data in your “state” column variable? The higher write
latencies at the 95th percentile and above could line up with large volumes
of data for particular rows in that column (the one column not present in
both tables).
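
If it is easy to check, a quick sample of that column should show how much
the values vary (a rough sketch only, assuming the column really is called
state, the table is the one from your tablestats output, and the quotes are
needed because of the upper-case table name):

    cqlsh -e 'SELECT state FROM tims."MESSAGE_HISTORY_STATE" LIMIT 100;'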

Cheers
Ben

---


Ben Slater
Chief Product Officer, Instaclustr



On Mon, 22 Jul 2019 at 16:46, CPC  wrote:

> Hi guys,
>
> Any idea? I thought it might be a bug but could not find anything related
> on jira.
>
> On Fri, Jul 19, 2019, 12:45 PM CPC  wrote:
>
>> Hi Rajsekhar,
>>
>> Here are the details:
>>
>> 1)
>>
>> [cassadm@bipcas00 ~]$ nodetool tablestats tims.MESSAGE_HISTORY
>> Total number of tables: 259
>> 
>> Keyspace : tims
>> Read Count: 208256144
>> Read Latency: 7.655146714749506 ms
>> Write Count: 2218205275
>> Write Latency: 1.7826005103175133 ms
>> Pending Flushes: 0
>> Table: MESSAGE_HISTORY
>> SSTable count: 41
>> Space used (live): 976964101899
>> Space used (total): 976964101899
>> Space used by snapshots (total): 3070598526780
>> Off heap memory used (total): 185828820
>> SSTable Compression Ratio: 0.8219217809913125
>> Number of partitions (estimate): 8175715
>> Memtable cell count: 73124
>> Memtable data size: 26543733
>> Memtable off heap memory used: 27829672
>> Memtable switch count: 1607
>> Local read count: 7871917
>> Local read latency: 1.187 ms
>> Local write count: 172220954
>> Local write latency: 0.021 ms
>> Pending flushes: 0
>> Percent repaired: 0.0
>> Bloom filter false positives: 130
>> Bloom filter false ratio: 0.0
>> Bloom filter space used: 10898488
>> Bloom filter off heap memory used: 10898160
>> Index summary off heap memory used: 2480140
>> Compression metadata off heap memory used: 144620848
>> Compacted partition minimum bytes: 36
>> Compacted partition maximum bytes: 557074610
>> Compacted partition mean bytes: 155311
>> Average live cells per slice (last five minutes): 25.56639344262295
>> Maximum live cells per slice (last five minutes): 5722
>> Average tombstones per slice (last five minutes): 1.8681948424068768
>> Maximum tombstones per slice (last five minutes): 770
>> Dropped Mutations: 97812
>>
>> 
>> [cassadm@bipcas00 ~]$ nodetool tablestats tims.MESSAGE_HISTORY_STATE
>> Total number of tables: 259
>> 
>> Keyspace : tims
>> Read Count: 208257486
>> Read Latency: 7.655137315414438 ms
>> Write Count: 2218218966
>> Write Latency: 1.7825896304427324 ms
>> Pending Flushes: 0
>> Table: MESSAGE_HISTORY_STATE
>> SSTable count: 5
>> Space used (live): 6403033568
>> Space used (total): 6403033568
>> Space used by snapshots (total): 19086872706
>> Off heap memory used (total): 6727565
>> SSTable Compression Ratio: 0.271857664111622
>> Number of partitions (estimate): 1396462
>> Memtable cell count: 77450
>> Memtable data size: 620776
>> Memtable off heap memory used: 1338914
>> Memtable switch count: 1616
>> Local read count: 988278
>> Local read latency: 0.518 ms
>> Local write count: 109292691
>> Local write latency: 11.353 ms
>> Pending flushes: 0
>> Percent repaired: 0.0
>> Bloom filter false positives: 0
>> Bloom filter false ratio: 0.0
>> Bloom filter space used: 1876208
>> Bloom filter off heap memory used: 1876168
>> Index summary off heap memory used: 410747
>> Compression metadata off heap memory used: 3101736
>> Compacted partition minimum bytes: 36
>> Compacted partition maximum bytes

Re: high write latency on a single table

2019-07-22 Thread Rajsekhar Mallick
Hello Team,

The difference in write latencies between the two tables, though
significant, still leaves the higher value of 11.353 ms within an
acceptable range.

Writes are not an issue overall, but the higher write latency for this
particular table does point towards the shape of the data being written to
it. One thing I noticed is that the cell count column in the nodetool
tablehistograms output for the message_history_state table is scattered:
the partition size histogram is consistent across both tables, but the
cell (column) count histogram for the impacted table isn't uniform.
Maybe we can start thinking along these lines.
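
For reference, the cell count distribution I am referring to comes from
something like the commands below (keyspace/table names taken from the
tablestats output above; depending on the Cassandra version the command may
want keyspace.table as a single argument):

    nodetool tablehistograms tims MESSAGE_HISTORY
    nodetool tablehistograms tims MESSAGE_HISTORY_STATE

Comparing the Cell Count column at the 50%, 75%, 95% and 99% rows for the
two tables should show how uneven message_history_state is.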

I would also wait for some expert advice here.

Thanks


On Mon, 22 Jul 2019, 12:31 PM Ben Slater wrote:

Re: high write latency on a single table

2019-07-22 Thread CPC
Hi everybody,

The state column contains "R" or "D" values, just a single character. As
Rajsekhar said, the only difference is that the table can contain a high
number of cells per partition. In the meantime we ran a major compaction
and the data per node was 5-6 GB.
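
(For reference, the major compaction was just a plain nodetool compaction,
roughly:

    nodetool compact tims MESSAGE_HISTORY_STATE

nothing beyond the defaults.)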

On Mon, Jul 22, 2019, 10:56 AM Rajsekhar Mallick 
wrote: