I consume data from Kafka and insert it into a Cassandra cluster using the Java API. The table has four key columns, one of which is a millisecond-resolution timestamp. But when I execute the code, it inserts only 120 to 190 rows and ignores the rest of the incoming data!
What could be causing the problem? Bad insert code?
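A minimal sketch of the kind of consume-and-insert loop described, assuming the DataStax Java driver 3.x and kafka-clients 2.x; the contact point, keyspace, table, and column names are all placeholders. The comment marks one classic way rows can "disappear" when the key includes a millisecond timestamp:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class KafkaToCassandraSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "cassandra-writer");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace");
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {

            consumer.subscribe(Collections.singletonList("events"));

            // Hypothetical table with a 4-column primary key whose last
            // component is a millisecond timestamp stored as bigint.
            PreparedStatement insert = session.prepare(
                    "INSERT INTO events (k1, k2, k3, event_ts, payload)"
                    + " VALUES (?, ?, ?, ?, ?)");

            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> rec : records) {
                    // Cassandra INSERTs are upserts: if two records map to the
                    // same (k1, k2, k3, event_ts), e.g. several events within
                    // the same millisecond, the later write silently replaces
                    // the earlier one, so the table ends up holding fewer rows
                    // than were inserted.
                    session.execute(insert.bind(rec.key(), "p2", "p3",
                            System.currentTimeMillis(), rec.value()));
                }
            }
        }
    }
}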
I have a 3-node Cassandra 3.11 cluster; one node has 4 GB of memory and the other two have 2 GB. All of them have 4 CPU cores. I want to insert data into a table and read it back for visualization at the same time. When I insert data using the Flink Cassandra Connector at a rate of 200 inserts per second, the reader application can't con
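A minimal sketch of the writer side of such a job, assuming Flink's flink-connector-cassandra artifact; the host, keyspace, table, and the Tuple2 schema are all placeholders:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;

public class FlinkToCassandraSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source; the real job would read from Kafka instead.
        DataStream<Tuple2<String, Long>> stream = env.fromElements(
                Tuple2.of("sensor-1", 42L),
                Tuple2.of("sensor-2", 17L));

        // The connector turns each Tuple2 into one bound INSERT.
        CassandraSink.addSink(stream)
                .setQuery("INSERT INTO mykeyspace.data (id, value) VALUES (?, ?);")
                .setHost("127.0.0.1")
                .build();

        env.execute("flink-to-cassandra-sketch");
    }
}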
Hi,
I have a string containing the name of one of Cassandra's primitive data types, for example:
String type = "bigint";
Now I want to create the corresponding DataType object for it.
I tried:
DataType.custom(type)
but it was useless. Note that I don't want to use the following solution:
DataType.bigint()
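Assuming the DataStax Java driver 3.x, where DataType.custom() is meant for server-side custom types rather than for parsing CQL type names, one way to avoid hard-coding each case is to build a lookup map from the driver's own DataType.allPrimitiveTypes() set. A sketch:

import java.util.HashMap;
import java.util.Map;

import com.datastax.driver.core.DataType;

public class DataTypeLookup {
    // Name -> DataType map built once from the driver's list of primitive
    // types, so no individual type is hard-coded.
    private static final Map<String, DataType> PRIMITIVES = new HashMap<>();
    static {
        for (DataType dt : DataType.allPrimitiveTypes()) {
            PRIMITIVES.put(dt.getName().toString().toLowerCase(), dt);
        }
    }

    public static DataType fromString(String type) {
        DataType dt = PRIMITIVES.get(type.toLowerCase());
        if (dt == null) {
            throw new IllegalArgumentException("Not a primitive CQL type: " + type);
        }
        return dt;
    }

    public static void main(String[] args) {
        String type = "bigint";
        System.out.println(fromString(type));  // the bigint DataType instance
    }
}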
I created a table using the command:
CREATE TABLE correlated_data (
    processing_timestamp bigint,
    generating_timestamp bigint,
    data text,
    PRIMARY KEY (processing_timestamp, generating_timestamp)
) WITH CLUSTERING ORDER BY (generating_timestamp DESC);
When I get data using the command
>
> And then read:
> select * from desc_test;
>
>  id  | name
> -----+------
>  fgh | bcd
>  fgh | abc
>  fgh | aaa
>  abc | bcd
>  abc | abc
>  abc | aaa
>
> (6 rows)
>
> You can see that the data is properly ordered in descending manner.
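For completeness, a sketch of reading the correlated_data table back through the Java driver 3.x; the contact point, keyspace, and the literal partition key value are placeholders. Within one partition, rows are returned in the table's clustering order, here generating_timestamp DESC:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ReadCorrelatedDataSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            // All rows of one partition; they arrive ordered by
            // generating_timestamp DESC, matching the CLUSTERING ORDER.
            for (Row row : session.execute(
                    "SELECT generating_timestamp, data FROM correlated_data"
                    + " WHERE processing_timestamp = 1556394660000")) {
                System.out.println(row.getLong("generating_timestamp")
                        + " -> " + row.getString("data"));
            }
        }
    }
}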
Hi,
I want to run a cqlsh query on a Cassandra table using IN:
SELECT * FROM data WHERE nid = 'value' AND mm IN (201905, 201904) AND tid = 'value2' AND ts >= 155639466 AND ts <= 155699946;
The nid and mm columns are the partition key and ts is the clustering key.
The problem is Cassandra
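For what it's worth, the same query can also be issued through the Java driver 3.x by binding the whole IN list as one collection parameter; the contact point and keyspace are placeholders, and I'm assuming ts is a bigint:

import java.util.Arrays;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class PartitionInQuerySketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            // IN on a partition key column fans out to one read per listed
            // partition, so long IN lists get expensive quickly.
            PreparedStatement ps = session.prepare(
                    "SELECT * FROM data WHERE nid = ? AND mm IN ?"
                    + " AND tid = ? AND ts >= ? AND ts <= ?");
            for (Row row : session.execute(ps.bind(
                    "value", Arrays.asList(201905, 201904),
                    "value2", 155639466L, 155699946L))) {
                System.out.println(row);
            }
        }
    }
}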