Clients are using an application.conf like:

datastax-java-driver {
  basic.request.timeout = 60 seconds
  basic.request.consistency = ONE
  basic.contact-points = ["172.16.110.3:9042", "172.16.110.4:9042", "172.16.100.208:9042", "172.16.100.224:9042", "172.16.100.225:9042", "172.16.100.253:9042", "172.16.100.254:9042"]
  basic.load-balancing-policy {
        local-datacenter = datacenter1
  }
}
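
The session is just built from that config; roughly like the sketch below, where the query is only a placeholder (CqlSession.builder().build() picks up application.conf from the classpath):

import com.datastax.oss.driver.api.core.CqlSession;

public class SessionSketch {
    public static void main(String[] args) {
        // Builds a session using the datastax-java-driver settings above
        // (contact points, timeout, default consistency) found on the classpath.
        try (CqlSession session = CqlSession.builder().build()) {
            // Placeholder query, just to show the session end to end.
            session.execute("SELECT release_version FROM system.local")
                   .forEach(row -> System.out.println(row.getString("release_version")));
        }
    }
}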

So no, I'm not using a token-aware policy.  I'm googling that now... because I don't know what it is!

-Joe

On 12/2/2020 12:18 PM, Carl Mueller wrote:
Are you using a token-aware policy for the driver?

If your writes are at ONE and your reads are at ONE, the data may not have propagated to all replicas yet, depending on which coordinator is used.

TokenAware will make that a bit better.
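
For what it's worth, with driver 4.x the default load-balancing policy already does token-aware routing whenever the statement carries routing information, which bound statements do. A minimal sketch, with made-up keyspace/table/column names:

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import com.datastax.oss.driver.api.core.cql.Row;

class TokenAwareReadSketch {
    // Keyspace/table/column names here are made up for illustration.
    static Row readById(CqlSession session, String id) {
        // Binding the partition key to a prepared statement gives the driver a
        // routing key, so the (token-aware) default policy can pick a replica
        // as coordinator instead of an arbitrary node.  In real code, prepare
        // once and reuse the PreparedStatement.
        PreparedStatement ps = session.prepare("SELECT * FROM mykeyspace.doc WHERE id = ?");
        return session.execute(ps.bind(id)).one();
    }
}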

On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <joseph.obernber...@gmail.com <mailto:joseph.obernber...@gmail.com>> wrote:

    Hi Carl - thank you for replying.
    I am using Cassandra 3.11.9-1

    Rows are not typically being deleted - I assume you're referring
    to tombstones.  That shouldn't be the case here, since we haven't
    deleted anything.
    This is a test cluster and some of the machines are small (hence
    the one node with 128 tokens and 14.6% ownership - it has much less
    disk space than the other nodes).  This is one of the features I
    really like about Cassandra - being able to size nodes based on
    disk/CPU/RAM.

    All data is currently written at ONE and read at ONE.  I can
    reproduce this issue at will, so I can try different things easily.
    I tried changing the read process to use QUORUM and the issue still
    occurs (a sketch of that per-request QUORUM read follows the stats
    below).  Right now I'm running a 'nodetool repair' to see if that
    helps.  Our largest table 'doc' has the following stats:

    Table: doc
    SSTable count: 28
    Space used (live): 113609995010
    Space used (total): 113609995010
    Space used by snapshots (total): 0
    Off heap memory used (total): 225006197
    SSTable Compression Ratio: 0.37730474570644196
    Number of partitions (estimate): 93641747
    Memtable cell count: 0
    Memtable data size: 0
    Memtable off heap memory used: 0
    Memtable switch count: 3712
    Local read count: 891065091
    Local read latency: NaN ms
    Local write count: 7448281135
    Local write latency: NaN ms
    Pending flushes: 0
    Percent repaired: 0.0
    Bloom filter false positives: 988
    Bloom filter false ratio: 0.00001
    Bloom filter space used: 151149880
    Bloom filter off heap memory used: 151149656
    Index summary off heap memory used: 38654701
    Compression metadata off heap memory used: 35201840
    Compacted partition minimum bytes: 104
    Compacted partition maximum bytes: 3379391
    Compacted partition mean bytes: 3389
    Average live cells per slice (last five minutes): NaN
    Maximum live cells per slice (last five minutes): 0
    Average tombstones per slice (last five minutes): NaN
    Maximum tombstones per slice (last five minutes): 0
    Dropped Mutations: 8174438
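
    For the QUORUM test, the consistency level is overridden per request,
    roughly like the sketch below (the query, keyspace, and id are
    placeholders):

    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
    import com.datastax.oss.driver.api.core.cql.ResultSet;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    class QuorumReadSketch {
        // Placeholder query and names; overrides the config-level ONE for this request only.
        static ResultSet readAtQuorum(CqlSession session, String id) {
            SimpleStatement read = SimpleStatement.builder("SELECT * FROM mykeyspace.doc WHERE id = ?")
                    .addPositionalValue(id)
                    .setConsistencyLevel(DefaultConsistencyLevel.QUORUM)
                    .build();
            return session.execute(read);
        }
    }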

    Thoughts/ideas?  Thank you!

    -Joe

    On 12/2/2020 11:49 AM, Carl Mueller wrote:
    Why is one of your nodes only at 14.6% ownership? That's weird,
    unless you have a small row count.

    Are you frequently deleting rows? Are you frequently writing rows
    at ONE?

    What version of Cassandra?



    On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger
    <joseph.obernber...@gmail.com
    <mailto:joseph.obernber...@gmail.com>> wrote:

        Hi All - this is my first post here.  I've been using Cassandra for
        several months now and am loving it.  We are moving from Apache HBase
        to Cassandra for a big data analytics platform.

        I'm using Java to get rows from Cassandra and very frequently get a
        java.util.NoSuchElementException when iterating through a ResultSet.
        If I retry the query (often several times), it works.  The debug log
        on the Cassandra nodes shows this message:
        org.apache.cassandra.service.DigestMismatchException: Mismatch for
        key DecoratedKey
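
        The read loop itself is roughly the sketch below (keyspace, table,
        and column names are placeholders; the real code differs):

        import com.datastax.oss.driver.api.core.CqlSession;
        import com.datastax.oss.driver.api.core.cql.PreparedStatement;
        import com.datastax.oss.driver.api.core.cql.Row;

        class ReadLoopSketch {
            // Keyspace/table/column names are placeholders; the real query differs.
            static void printBodies(CqlSession session, String id) {
                PreparedStatement ps = session.prepare("SELECT body FROM mykeyspace.doc WHERE id = ?");
                for (Row row : session.execute(ps.bind(id))) {   // ResultSet is Iterable<Row>
                    System.out.println(row.getString("body"));
                }
            }
        }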

        My cluster looks like this:

        Datacenter: datacenter1
        =======================
        Status=Up/Down
        |/ State=Normal/Leaving/Joining/Moving
        --  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
        UN  172.16.100.224  340.5 GiB   512     50.9%             8ba646ac-2b33-49de-a220-ae9842f18806  rack1
        UN  172.16.100.208  269.19 GiB  384     40.3%             4e0ba42f-649b-425a-857a-34497eb3036e  rack1
        UN  172.16.100.225  282.83 GiB  512     50.4%             247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
        UN  172.16.110.3    409.78 GiB  768     63.2%             0abea102-06d2-4309-af36-a3163e8f00d8  rack1
        UN  172.16.110.4    330.15 GiB  512     50.6%             2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
        UN  172.16.100.253  98.88 GiB   128     14.6%             6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
        UN  172.16.100.254  204.5 GiB   256     30.0%             87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1

        I suspect this has to do with how I'm using consistency levels?
        Typically I'm using ONE.  I just set dclocal_read_repair_chance to
        0.0, but I'm still seeing the issue.  Any help/tips?
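
        For reference, setting that option per table is just an ALTER; a
        sketch, with a placeholder keyspace name:

        import com.datastax.oss.driver.api.core.CqlSession;

        class ReadRepairChanceSketch {
            // dclocal_read_repair_chance is a Cassandra 3.x table option (removed in 4.0);
            // the keyspace name is a placeholder.
            static void disableDcLocalReadRepair(CqlSession session) {
                session.execute("ALTER TABLE mykeyspace.doc WITH dclocal_read_repair_chance = 0.0");
            }
        }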

        Thank you!

        -Joe Obernberger

