Following up again on this. Does anyone have any thoughts?
On Tue, Jun 30, 2020 at 10:42 AM Check Peck wrote:
> We are trying to remove two columns in a table with 3 and make them UDT
> instead of having them as columns. So we came up with two options below. I
> wanted to understand if there is any d
We are trying to remove two columns from a table that has three and move them
into a UDT instead of keeping them as separate columns. So we came up with the
two options below. I wanted to understand whether there is any difference
between these two UDT approaches in the Cassandra database?
*One option is:*
> CREATE TYPE test_type (
>
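Since the CREATE TYPE statement above is cut off, here is a minimal sketch of
what the two options might look like, using hypothetical field names (col_a,
col_b) for the two columns being moved; this is an illustration, not the
poster's actual schema:

CREATE TYPE test_type (
col_a text, -- hypothetical field standing in for the first removed column
col_b text  -- hypothetical field standing in for the second removed column
);

-- Option 1: keep the two values as regular columns
CREATE TABLE test_with_columns (
id int PRIMARY KEY,
col_a text,
col_b text
);

-- Option 2: fold them into a single frozen UDT column
CREATE TABLE test_with_udt (
id int PRIMARY KEY,
data frozen<test_type>
);

One practical difference: in older Cassandra versions a UDT column must be
frozen, so the whole value is rewritten on every update, whereas separate
columns (or non-frozen UDTs on newer versions) can be updated independently.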
I have a Scylla table as shown below:
cqlsh:sampleks> describe table test;
CREATE TABLE test (
client_id int,
when timestamp,
process_ids list,
md text,
PRIMARY KEY (client_id, when) ) WITH CLUSTERING ORDER BY (when DESC)
AND bloom_f
And then from the DataStax Java driver, I can use the following. Am I right?
To read:
row.getLong("<bigint column>");
To write:
boundStatement.setLong("<bigint column>", value);
On Wed, Dec 7, 2016 at 6:50 PM, Varun Barala
wrote:
> use `bigint` for long.
>
>
> Regards,
> Varun Barala
>
> On Thu, Dec 8, 2016 at 10:32 AM, Check Pec
What is the CQL data type I should use for long? I have to create a column
with a long data type. The Cassandra version is 2.0.10.
CREATE TABLE storage (
key text,
clientid int,
deviceid long, // this is wrong I guess as I don't see long in CQL?
PRIMARY KEY (topic, partition)
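As the reply further up in this thread says, CQL has no long type; bigint is
the 64-bit signed integer type that maps to a Java long. A hedged sketch of
the corrected table follows; the PRIMARY KEY here is illustrative, since the
one in the original snippet references columns that are not in the table:

CREATE TABLE storage (
key text,
clientid int,
deviceid bigint, -- bigint is the CQL 64-bit type, read/written as a Java long
PRIMARY KEY (key, clientid)
);

With the DataStax Java driver, such a column is then read with
row.getLong("deviceid") and written with boundStatement.setLong("deviceid",
value), as discussed above.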
I have a table like this in Cassandra-
CREATE TABLE DATA_HOLDER (USER_ID TEXT, RECORD_NAME TEXT, RECORD_VALUE
BLOB, PRIMARY KEY (USER_ID, RECORD_NAME));
I want to count the distinct USER_ID values in the above table. Is there any
way I can do that?
My Cassandra version is:
[cqlsh 4.1.1 | Cassandra
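One hedged approach (not spelled out in this thread): since USER_ID is the
partition key of DATA_HOLDER, you can select the distinct partition keys and
count the returned rows on the client side; COUNT(DISTINCT ...) is not
supported in this Cassandra version:

-- Returns one row per distinct partition key; count the rows client side.
SELECT DISTINCT user_id FROM data_holder;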
Sending again as I didn't get any response on this.
Any thoughts?
On Fri, Feb 27, 2015 at 8:24 PM, Check Peck wrote:
> I have a Cassandra table like this -
>
> create table user_record (user_id text, record_name text, record_value
> blob, primary key (user_id, record_na
I have a Cassandra table like this -
create table user_record (user_id text, record_name text, record_value
blob, primary key (user_id, record_name));
What is the best way to extract all the user_id values from this table? As of
now, I cannot change my data model to do this exercise, so I need to fin
I am trying to design a table in Cassandra in which I will have multiple
JSON strings for a particular client id.
abc123 - jsonA
abc123 - jsonB
abcd12345 - jsonC
My query pattern is going to be -
Give me all JSON strings for a particular client id.
Gi
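A hedged sketch of one way this is commonly modelled (the names here are
illustrative, not from the thread): partition by client_id and add a
clustering column so one partition can hold many JSON strings, which makes
"give me all JSON for a client id" a single-partition query:

CREATE TABLE client_json (
client_id text,
json_id timeuuid, -- illustrative clustering column to keep entries ordered and unique
json_blob text,   -- the JSON document stored as text
PRIMARY KEY (client_id, json_id)
);

-- Query pattern: all JSON strings for one client
SELECT json_blob FROM client_json WHERE client_id = 'abc123';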
On Tue, Sep 23, 2014 at 3:41 PM, DuyHai Doan wrote:
> now - 15 mins
Can I run something like this in CQL using cqlsh?
SELECT * FROM client_data WHERE client_id = 1 and last_modified_date >= now
- 15 mins
When I ran the above query I got an error from my CQL client -
Bad Request: line 1:81 no viable alt
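As the reply below points out, CQL of that era has no date arithmetic such as
"now - 15 mins"; the usual workaround is to compute the cutoff timestamp in
the application (or paste a literal) and still restrict on the partition key.
A hedged sketch with an illustrative literal:

SELECT *
FROM client_data
WHERE client_id = 1
AND last_modified_date >= '2014-09-23 15:26:00'; -- cutoff computed by the client as now minus 15 minutes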
is possible to request a "range" of data according to the
> last_modified_date, but you still need to provide the client_id, the
> partition key, in any case
>
>
> On Wed, Sep 24, 2014 at 12:23 AM, Check Peck
> wrote:
>
>> I have a table structure like be
I have a table structure like below -
CREATE TABLE client_data (
client_id int,
consumer_id text,
last_modified_date timestamp,
PRIMARY KEY (client_id, last_modified_date, consumer_id)
)
I have a query pattern like this - Give me everything that has changed
wit
your table partition key = test_id, client_name = first clustering
> column, record_data = second clustering column
>
>
> On Fri, Sep 19, 2014 at 5:41 PM, Check Peck
> wrote:
>
>> I am trying to use wide rows concept in my data modelling design for
>> Cassandra
I am trying to use the wide rows concept in my data modelling design for
Cassandra. We are using Cassandra 2.0.6.
CREATE TABLE test_data (
test_id int,
client_name text,
record_data text,
creation_date timestamp,
last_modified_date timestamp,
PRIMARY KEY (test_i
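The quoted reply above spells out the intended key layout (test_id as the
partition key, client_name and record_data as clustering columns), so a
hedged reconstruction of the truncated table might look like this; treat it
as an illustration rather than the poster's exact DDL:

CREATE TABLE test_data (
test_id int,
client_name text,
record_data text,
creation_date timestamp,
last_modified_date timestamp,
PRIMARY KEY (test_id, client_name, record_data) -- wide row: one partition per test_id
);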
I have a Cassandra cluster version as -
cqlsh:dataks> show version;
[cqlsh 2.3.0 | Cassandra 2.0.6 | CQL spec 3.0.0 | Thrift protocol
19.39.0]
And I have a table like this -
CREATE TABLE data_test (
valid_id int,
data_id text,
client_name text,
creation_date t
test data
> set.
>
> More broadly, it seems like you would benefit from either deltas (only
> retrieve new data) or something like paging (only retrieve currently
> relevant data), although it's really difficult to say without more
> information.
>
> On Wed, Sep 17, 2014 at 1
I have recently started working with Cassandra. We have a Cassandra cluster
which is using DSE 4.0 and has vnodes enabled. We have tables
like this -
Below is my first table -
CREATE TABLE customers (
customer_id int PRIMARY KEY,
last_modified_date timeuuid,
customer
We have a Cassandra cluster in three different datacenters (DC1, DC2 and DC3)
and we have 10 machines in each datacenter. We have a few tables in Cassandra
which have fewer than 100 records.
What we are seeing is that some tables are out of sync between machines in
DC3 as compared to DC1 or DC2 when we
I have our application code deployed in two data centers, DC1 and DC2, and
the Cassandra nodes are also in DC1 and DC2, making a single cluster.
Our application servers in DC1 are communicating with the DC1 Cassandra
nodes, which I verified with "netstat -a | grep 9042".
But somehow internally the DC1 Cassandra n
We have an approximately 36-node Cassandra cluster across three datacenters.
Each datacenter has 12 nodes.
We already have data flowing into Cassandra now and we cannot wipe out all
our data.
Considering this - what is the right way to rename the cluster with no, or
at least minimal, impact?
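Not from this thread, but the commonly cited live-rename procedure (verify it
against your Cassandra version before relying on it) is to update the locally
stored cluster name on every node, then change cassandra.yaml and do a
rolling restart; a hedged sketch:

-- On each node, via cqlsh:
UPDATE system.local SET cluster_name = 'NewClusterName' WHERE key = 'local';

-- Then run "nodetool flush system", set cluster_name: NewClusterName in
-- cassandra.yaml, and restart the node; repeat node by node so the cluster
-- stays available.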
Just to add, nobody should be able to read from or write to our Cassandra
database through any API *or any CQL client as well*; only our team should
be able to do that.
On Fri, Apr 4, 2014 at 11:29 PM, Check Peck wrote:
> Thanks Mark. But what about the Cassandra database? I don't want an
Open the file and update the username and password values under the
> [cassandra] section:
>
> [cassandra]
> username =
> seed_hosts =
> api_port =
> password =
>
> After changing properties in this file, restart OpsCenter for the changes
> to take effect.
>
>
> M
Hi All,
We would like to secure our Cassandra database. We don't want anybody other
than our team members to read from or write to our Cassandra database.
We are using Cassandra 1.2.9 in production and we have a 36-node Cassandra
cluster, 12 nodes in each colo, as we have three datacenters.
But we would lik
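The thread does not spell this out, but the usual approach on Cassandra 1.2.x
is to enable internal authentication and authorization (authenticator:
PasswordAuthenticator and authorizer: CassandraAuthorizer in cassandra.yaml,
followed by a rolling restart) and then manage accounts in CQL. A hedged
sketch with hypothetical user and keyspace names:

-- Log in as the default superuser, then:
CREATE USER team_admin WITH PASSWORD 'change_me' SUPERUSER;
CREATE USER app_user WITH PASSWORD 'change_me_too' NOSUPERUSER;

-- Restrict the application account to the keyspace it needs:
GRANT SELECT ON KEYSPACE app_ks TO app_user;
GRANT MODIFY ON KEYSPACE app_ks TO app_user;

-- Finally, change the password of (or drop) the default cassandra account so
-- that only team-managed credentials can read or write.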
Hi Guys,
I have a couple of questions on the DataStax C++ driver. They are not related
to this particular post, but nobody is replying to my original email thread,
and in this email thread I saw people talking about the DataStax C++ driver.
Not sure whether you might be able to help me or not, but trying my luck -
W
...). I suppose the C++ driver will have
> a similar class/method. For the app in DC1, provide only nodes in DC1 as
> contact points.
>
> regards
>
>
> On Tue, Mar 4, 2014 at 6:47 AM, Check Peck wrote:
>
>> I have couple of question on Datastax C++ driver.
>>
>>
I have a couple of questions on the DataStax C++ driver.
We have a 36-node Cassandra cluster: 12 nodes in DC1, 12 nodes in DC2, and
12 nodes in DC3.
And our application code is also in three datacenters: 11 nodes in DC1, 11
nodes in DC2, and 11 nodes in DC3.
So my question is, if the applicat
I am working on a project in which I am supposed to store snappy-compressed
data in Cassandra, so that when I retrieve the same data from Cassandra it
is still snappy-compressed in memory, and then I will decompress that data
using snappy to get the actual data from it.
I am having a byte arr
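Since the message is cut off, here is only a hedged sketch of the storage
side (names are illustrative): the compressed byte array is normally stored
in a blob column, and compression/decompression happens entirely in the
application, so Cassandra just sees opaque bytes:

CREATE TABLE compressed_payloads (
payload_id text PRIMARY KEY,
payload blob -- snappy-compressed bytes, compressed and decompressed by the client
);

Note that Cassandra also applies its own SSTable compression by default,
which is separate from this application-level snappy compression.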