Hello,

I have an issue with an <ErrorMessage code=0000 [Server error]
message="java.lang.NullPointerException"> when I query a table that has static
fields (without a WHERE clause) on a Cassandra 2.1.8, 2-node cluster.

There is no further indication in the log:

ERROR [SharedPool-Worker-1] 2015-08-18 10:39:02,549 QueryMessage.java:132 - Unexpected error during query
java.lang.NullPointerException: null
ERROR [SharedPool-Worker-1] 2015-08-18 10:39:02,550 ErrorMessage.java:251 - Unexpected exception during request
java.lang.NullPointerException: null


The scenario was:

1) Load data into the table with Spark (~12 million rows).

2) Make some deletes using the primary keys and use the static fields to
keep a certain state for each partition (the shape of these statements is sketched below).
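
To make step 2 concrete, the statements look roughly like the following; the values and the state written into staticField1 are only examples, the shape is what matters:

-- delete one clustered row, identified by its full primary key (example values)
DELETE FROM my_table
 WHERE pk1 = 'partA' AND pk2 = 'partB'
   AND ck1 = '2015-08-17 10:00:00+0000' AND ck2 = 'x' AND ck3 = 'y';

-- record a per-partition state in a static column (only the partition key is needed)
UPDATE my_table SET staticField1 = 'cleaned'
 WHERE pk1 = 'partA' AND pk2 = 'partB';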

The NullPointerException occurs when I query the whole table after making
some of these deletions.
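
To be explicit, by "query the whole table" I simply mean a scan without a WHERE clause from cqlsh, something like:

-- full scan including the static columns; this is what returns the server error
SELECT * FROM my_table;

-- same scan with the columns spelled out, including the two static fields
SELECT pk1, pk2, ck1, ck2, ck3, valuefield, staticField1, staticField2
  FROM my_table;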

I observed that:

- Before the delete statements, the table is perfectly readable.

- It is repeatable (I managed to isolate ~20 delete statements that trigger
the NullPointerException when they are executed through cqlsh).

- It occurs only with some rows (nothing special about these rows compared to
the others).

- I did not manage to reproduce the problem with the problematic rows in a
toy table.

- Running repair, compact, and scrub on each node before and after the delete
statements did not change anything (still the NullPointerException after the
deletes).

- Maybe it is related to the static columns?

The table structure is:

CREATE TABLE my_table (
    pk1 text,
    pk2 text,
    ck1 timestamp,
    ck2 text,
    ck3 text,
    valuefield text,
    staticField1 text static,
    staticField2 text static,
    PRIMARY KEY ((pk1, pk2), ck1, ck2, ck3)
) WITH CLUSTERING ORDER BY (ck1 DESC, ck2 ASC, ck3 ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 0
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

Has anyone already met this issue, or does anyone have an idea how to solve
or investigate this exception?

Thank you

Regards

--

Hervé
