Thank you Steve - once I have the key, how do I get to a node?

After reading some of the documentation, it looks like the load-balancing-policy below *is* a token-aware policy.  Perhaps writes need to be done with QUORUM; I don't know how long Cassandra takes to make the replicas consistent when everything is written at ONE.  So if that propagation hasn't happened yet and the first node asked doesn't have the data, the client gets nothing back?
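
If I'm reading the driver docs right, trying QUORUM for writes (and reads) would just be a client config change - a sketch of what I'd try, untested, using the same application.conf key as in the config below:

    datastax-java-driver {
      basic.request.consistency = QUORUM
    }

My understanding is that with everything at ONE, the other replicas only catch up asynchronously (hinted handoff, read repair, or an explicit nodetool repair), so a read at ONE that lands on a replica that hasn't caught up yet can legitimately come back empty.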

-Joe

On 12/2/2020 2:09 PM, Steve Lacerda wrote:
If you can determine the key, then you can determine which nodes do and do not have the data. That may let you glean a bit more information - for example, whether one node is having problems rather than the entire cluster.
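
For example (a sketch - keyspace, table and key here are placeholders, and this is simplest with a single-column partition key), running this on any node prints the replicas that own a given key:

    nodetool getendpoints <keyspace> <table> <partition_key_value>

With TRACING ON in cqlsh you can also see which replica actually served a particular read, which helps narrow things down to a single misbehaving node.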

On Wed, Dec 2, 2020 at 9:32 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

    Clients are using an application.conf like:

    datastax-java-driver {
      basic.request.timeout = 60 seconds
      basic.request.consistency = ONE
      basic.contact-points = ["172.16.110.3:9042", "172.16.110.4:9042",
        "172.16.100.208:9042", "172.16.100.224:9042", "172.16.100.225:9042",
        "172.16.100.253:9042", "172.16.100.254:9042"]
      basic.load-balancing-policy {
        local-datacenter = datacenter1
      }
    }

    So no, I'm not using a token aware policy.  I'm googling that
    now...cuz I don't know what it is!

    -Joe

    On 12/2/2020 12:18 PM, Carl Mueller wrote:
    Are you using token aware policy for the driver?

    If your writes are ONE and your reads are ONE, the propagation
    may not have happened, depending on the coordinator that is used.

    TokenAware will make that a bit better.
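
    If you're on the 4.x Java driver, I believe the default load-balancing
    policy is already token aware - but the routing only helps when the
    driver can work out the partition key, e.g. from a prepared statement.
    A rough sketch (keyspace, table and column names here are placeholders,
    not your schema):

        import com.datastax.oss.driver.api.core.ConsistencyLevel;
        import com.datastax.oss.driver.api.core.CqlSession;
        import com.datastax.oss.driver.api.core.cql.BoundStatement;
        import com.datastax.oss.driver.api.core.cql.PreparedStatement;
        import com.datastax.oss.driver.api.core.cql.Row;

        public class TokenAwareReadSketch {
            public static void main(String[] args) {
                // Contact points and local DC come from application.conf
                try (CqlSession session = CqlSession.builder().build()) {
                    // Binding the partition key lets the driver compute the
                    // token and prefer a replica that actually owns it.
                    PreparedStatement ps = session.prepare(
                        "SELECT * FROM my_keyspace.my_table WHERE id = ?");
                    BoundStatement bound = ps.bind("some-key")
                        // Per-request override, e.g. to rule out a stale replica
                        .setConsistencyLevel(ConsistencyLevel.QUORUM);
                    Row row = session.execute(bound).one();
                    System.out.println(row == null ? "no row found" : "got a row");
                }
            }
        }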

    On Wed, Dec 2, 2020 at 11:12 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

        Hi Carl - thank you for replying.
        I am using Cassandra 3.11.9-1

        Rows are not typically being deleted - I assume you're
        referring to tombstones.  I don't think that should be the
        case here, as I don't believe we've deleted anything.
        This is a test cluster and some of the machines are small
        (hence the one node with 128 tokens and 14.6% - it has a lot
        less disk space than the other nodes).  This is one of the
        features that I really like with Cassandra - being able to
        size nodes based on disk/CPU/RAM.

        All data is currently written with ONE.  All data is read
        with ONE.  I can replicate this issue at will, so I can try
        different things easily.  I tried changing the read process
        to use QUORUM and the issue still takes place.  Right now I'm
        running a 'nodetool repair' to see if that helps.  Our
        largest table 'doc' has the following stats:

        Table: doc
        SSTable count: 28
        Space used (live): 113609995010
        Space used (total): 113609995010
        Space used by snapshots (total): 0
        Off heap memory used (total): 225006197
        SSTable Compression Ratio: 0.37730474570644196
        Number of partitions (estimate): 93641747
        Memtable cell count: 0
        Memtable data size: 0
        Memtable off heap memory used: 0
        Memtable switch count: 3712
        Local read count: 891065091
        Local read latency: NaN ms
        Local write count: 7448281135
        Local write latency: NaN ms
        Pending flushes: 0
        Percent repaired: 0.0
        Bloom filter false positives: 988
        Bloom filter false ratio: 0.00001
        Bloom filter space used: 151149880
        Bloom filter off heap memory used: 151149656
        Index summary off heap memory used: 38654701
        Compression metadata off heap memory used: 35201840
        Compacted partition minimum bytes: 104
        Compacted partition maximum bytes: 3379391
        Compacted partition mean bytes: 3389
        Average live cells per slice (last five minutes): NaN
        Maximum live cells per slice (last five minutes): 0
        Average tombstones per slice (last five minutes): NaN
        Maximum tombstones per slice (last five minutes): 0
        Dropped Mutations: 8174438
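
        (The repair I'm running is a plain 'nodetool repair' with defaults;
        since 'Percent repaired' above is 0.0, I may also try a full repair
        of this keyspace - something like 'nodetool repair --full <keyspace>',
        with the keyspace name filled in.)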

        Thoughts/ideas?  Thank you!

        -Joe

        On 12/2/2020 11:49 AM, Carl Mueller wrote:
        Why is one of your nodes only at 14.6% ownership? That's
        weird, unless you have a small rowcount.

        Are you frequently deleting rows? Are you frequently writing
        rows at ONE?

        What version of cassandra?



        On Wed, Dec 2, 2020 at 9:56 AM Joe Obernberger <joseph.obernber...@gmail.com> wrote:

            Hi All - this is my first post here.  I've been using Cassandra
            for several months now and am loving it.  We are moving from
            Apache HBase to Cassandra for a big data analytics platform.

            I'm using Java to get rows from Cassandra and very frequently get
            a java.util.NoSuchElementException when iterating through a
            ResultSet.  If I retry the query (often several times), it
            works.  The debug log on the Cassandra nodes shows this message:
            org.apache.cassandra.service.DigestMismatchException: Mismatch
            for key DecoratedKey
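
            (For context, a minimal sketch of the kind of read I'm doing,
            with placeholder keyspace/column names rather than my real
            schema - iterating the ResultSet itself, or checking one() for
            null, avoids the NoSuchElementException that a bare
            iterator.next() throws when no rows come back:)

                import com.datastax.oss.driver.api.core.CqlSession;
                import com.datastax.oss.driver.api.core.cql.ResultSet;
                import com.datastax.oss.driver.api.core.cql.Row;

                public class ReadSketch {
                    public static void main(String[] args) {
                        try (CqlSession session = CqlSession.builder().build()) {
                            ResultSet rs = session.execute(
                                "SELECT id FROM my_keyspace.my_table LIMIT 10");
                            // The enhanced for loop never steps past the end,
                            // so an empty result just means zero iterations.
                            for (Row row : rs) {
                                System.out.println(row.getString("id"));
                            }
                            // Single-row case: rs.one() returns null (rather
                            // than throwing) when the replica had no data.
                        }
                    }
                }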

            My cluster looks like this:

            Datacenter: datacenter1
            =======================
            Status=Up/Down
            |/ State=Normal/Leaving/Joining/Moving
            --  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
            UN  172.16.100.224  340.5 GiB   512     50.9%             8ba646ac-2b33-49de-a220-ae9842f18806  rack1
            UN  172.16.100.208  269.19 GiB  384     40.3%             4e0ba42f-649b-425a-857a-34497eb3036e  rack1
            UN  172.16.100.225  282.83 GiB  512     50.4%             247f3d70-d13b-4d68-9a53-2ed58e01a63e  rack1
            UN  172.16.110.3    409.78 GiB  768     63.2%             0abea102-06d2-4309-af36-a3163e8f00d8  rack1
            UN  172.16.110.4    330.15 GiB  512     50.6%             2a5ae735-6304-4e99-924b-44d9d5ec86b7  rack1
            UN  172.16.100.253  98.88 GiB   128     14.6%             6b528b0b-d7f7-4378-bba8-1857802d4f18  rack1
            UN  172.16.100.254  204.5 GiB   256     30.0%             87d0cb48-a57d-460e-bd82-93e6e52e93ea  rack1

            I suspect this has to do with how I'm using consistency
            levels?
            Typically I'm using ONE.  I just set the
            dclocal_read_repair_chance to
            0.0, but I'm still seeing the issue.  Any help/tips?

            Thank you!

            -Joe Obernberger


            


        



--
Steve Lacerda
e. steve.lace...@datastax.com
w. www.datastax.com
