Which exact Java version did you see this with?

On Wed, May 15, 2019 at 12:03 PM keshava <keshava.kous...@gmail.com> wrote:

> I gave it a try with a different Java version, and it worked. It seems to be
> some issue with the Java version in use.
>
> On 10-May-2019 14:48, "keshava" <keshava.kous...@gmail.com> wrote:
>
>> I will try changing the Java version.
>> With respect to the other point about hardware, I have this issue in multiple
>> setups, so I really doubt hardware is playing spoilsport here.
>>
>> On 10-May-2019 11:38, "Jeff Jirsa" <jji...@gmail.com> wrote:
>>
>>> It’s going to be very difficult to diagnose remotely.
>>>
>>> I don’t run or have an opinion on jdk7 but I would suspect the following:
>>>
>>> - bad hardware (DIMM, disk, network card, motherboard, processor, in that
>>> order)
>>> - bad JDK 7. I’d be inclined to upgrade to 8 personally, but rolling back
>>> to the previous version may not be a bad idea
>>>
>>>
>>> You’re in a tough spot if this is spreading. I’d personally be looking
>>> to try to isolate the source and roll forward or backward as quickly as
>>> possible. I don’t really suspect a Cassandra 2.1 bug here, but it’s possible
>>> I suppose. Take a snapshot now, as you may need it to try to recover data
>>> later.
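>>>
>>> A per-node snapshot can be taken with nodetool; a minimal sketch, assuming
>>> the keyspace is the "ccp" one shown in the cqlsh output below:
>>>
>>>   nodetool snapshot -t pre-recovery ccp
>>>
>>> The snapshot files end up under each table's snapshots/ directory and can
>>> be used for recovery later.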
>>>
>>>
>>> --
>>> Jeff Jirsa
>>>
>>>
>>> On May 9, 2019, at 10:53 PM, keshava <keshava.kous...@gmail.com> wrote:
>>>
>>> Yes, we do have compression enabled, using
>>> "org.apache.cassandra.io.compress.LZ4Compressor".
>>> It is spreading: as the number of inserts increases, it spreads across more
>>> entries.
>>> Yes, it did start with the JDK and OS upgrade.
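>>>
>>> The table's compression setting can be confirmed from cqlsh; a minimal
>>> check (keyspace and table names taken from the listing further down):
>>>
>>>   cqlsh:ccp> DESCRIBE TABLE socialcontact;
>>>
>>> For Cassandra 2.1 the printed definition should include something like
>>> compression = {'sstable_compression': 'LZ4Compressor'}.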
>>>
>>> Best regards  :)
>>> keshava Hosahalli
>>>
>>>
>>> On Thu, May 9, 2019 at 7:11 PM Jeff Jirsa <jji...@gmail.com> wrote:
>>>
>>>> Do you have compression enabled on your table?
>>>>
>>>> Did this start with the JDK upgrade?
>>>>
>>>> Is the compression spreading, or is it contained to the same % of
>>>> entries?
>>>>
>>>>
>>>>
>>>> On Thu, May 9, 2019 at 4:12 AM keshava <keshava.kous...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi, our application is running into a data corruption issue.
>>>>> The application uses Cassandra 2.1.11 with DataStax Java driver version
>>>>> 2.1.9. Everything had been working fine so far. Recently we changed our
>>>>> deployment environment to OpenJDK 1.7.191 (earlier it was 1.7.181) and
>>>>> CentOS 7.4 (earlier 6.8). This is happening randomly for one table: about
>>>>> 1 in every 4-5 entries gets corrupted. Writing new entries returns
>>>>> success, but when I try to read them back I get "data not found". When I
>>>>> list all the data in the table using cqlsh I see garbage entries like
>>>>>
>>>>>
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>>
>>>>> Here is the output from cqlsh:
>>>>>
>>>>> cqlsh:ccp> select id from socialcontact;
>>>>>
>>>>> id
>>>>> ----------------------------------
>>>>>
>>>>> 9BA31AE31000016A0000097C3F57FEF9
>>>>> 9BA10FB21000016A000000103F57FEF9
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>> 9BA3236C1000016A000009E63F57FEF9
>>>>> 9BA325361000016A000009FC3F57FEF9
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
>>>>> \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00.
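>>>>>
>>>>> Reading one of the written entries back individually shows the same
>>>>> "not found" behaviour; a sketch of the check (the id value here is
>>>>> hypothetical):
>>>>>
>>>>>   cqlsh:ccp> SELECT id FROM socialcontact WHERE id = 'SOME_ID';
>>>>>
>>>>>   (0 rows)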
>>>>>
>>>>> I enabled query tracing on both the Cassandra server and the driver and
>>>>> didn't notice any differences. I'm looking for any advice on resolving
>>>>> this issue.
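>>>>>
>>>>> A sketch of how the tracing was set up (the trace probability value here
>>>>> is an assumption):
>>>>>
>>>>>   nodetool settraceprobability 1.0
>>>>>   cqlsh:ccp> TRACING ON;
>>>>>
>>>>> and on the driver side, Statement.enableTracing() on the statement before
>>>>> executing it.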
>>>>>
>>>>> PS: I did try upgrading Cassandra to the latest in the 2.1 train, but it
>>>>> didn't help.
>>>>>
>>>>>
