- none: Cassandra does not encrypt any inter-node traffic.
- dc: Cassandra encrypts the traffic between the data centers.
- rack: Cassandra encrypts the traffic between the racks.
regards
Neha
On Wed, Jan 6, 2016 at 12:48 PM, Singh, Abhijeet
wrote:
> Security is a very wide concept. What exactly do you want to achi
Congrats Prem !!!
I am also planning to take up the certification.
Could you please provide some tips?
regards
Neha
On Wed, Nov 25, 2015 at 10:36 PM, Ariel Weisberg wrote:
> Hi,
>
> Congratulations! I hope the certification brings good things for you.
>
> Regards,
> Ariel
>
>
> On Sun
,value: {'SOX'}});
INSERT INTO test_path3 (path_id, mdata ) VALUES ( '1', { key
:'applicable-security-policy',value: {'FOX'}});
Can I query something like:
cqlsh:mykeyspace> SELECT * FROM test_path3 WHERE mdata.value CONTAINS {'Mime'};
It fails with a SyntaxException.
Thanks
regards
Neha
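A rough, untested sketch (keyspace/table names taken from the snippets above,
types assumed) of how the nested set could be declared as a frozen collection;
as Jack's reply below explains, only the full frozen value can then be matched,
not an individual entry inside it:

CREATE TABLE mykeyspace.test_path3 (
    path_id text PRIMARY KEY,
    mdata map<text, frozen<set<text>>>
);

-- indexing the map indexes its (frozen) values
CREATE INDEX test_path3_mdata_idx ON mykeyspace.test_path3 (mdata);

INSERT INTO mykeyspace.test_path3 (path_id, mdata)
VALUES ('1', { 'applicable-security-policy' : {'SOX', 'FOX'} });

-- CONTAINS has to name the whole frozen set, not one element of it
SELECT * FROM mykeyspace.test_path3 WHERE mdata CONTAINS {'SOX', 'FOX'};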
;FOX'};
2. Can I use a composite column? Any ideas?
regards
neha
On Sat, Nov 14, 2015 at 9:21 PM, Jack Krupansky
wrote:
> You can only nest frozen collections and even then you can only access the
> full nested value, not individual entries within the nested map.
>
> So, in
Any help?
On Tue, Nov 10, 2015 at 7:44 PM, Neha Dave wrote:
> How can we achieve nested collections in Cassandra?
>
> My requirement :
> metadata map> ... Is it possible?
>
> Eg. 'mime-type' : 'MIME'
> 'Security' : {'SOX
est_path (path_id, metadata ) VALUES ( '2', { 'mime-type':
{values : {'Mime'}}
{'applicable-security-policy' : {'SOX','FOX'}} });
Query (which does not work) can be :
SELECT * from test_path where metadata CONTAINS {values: {'FOX'}, 'SOX'}} ;
OR
SELECT * from test_path where metadata CONTAINS {values: {'FOX'};
Thanks
Regards
Neha
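Not from this thread, but one workaround sometimes used when individual nested
entries must be searchable: flatten the nesting into a single non-frozen set of
'key:value' strings (names below are made up), so an index can match one entry
at a time:

CREATE TABLE mykeyspace.test_path_flat (
    path_id text PRIMARY KEY,
    mdata set<text>
);

CREATE INDEX test_path_flat_mdata_idx ON mykeyspace.test_path_flat (mdata);

INSERT INTO mykeyspace.test_path_flat (path_id, mdata)
VALUES ('1', {'mime-type:Mime', 'applicable-security-policy:SOX',
              'applicable-security-policy:FOX'});

-- CONTAINS can now match a single flattened entry
SELECT * FROM mykeyspace.test_path_flat
WHERE mdata CONTAINS 'applicable-security-policy:FOX';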
Haven't used it, but you can try the SSTable bulk loader:
http://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsBulkloader_t.html
regards
Neha
On Tue, Sep 15, 2015 at 11:21 AM, Ajay Garg wrote:
> Hi All.
>
> We have a schema on one Cassandra-node, and wish to duplicate the
> e
Try
>cqlsh
regards
Neha
On Mon, Sep 14, 2015 at 3:53 PM, Ajay Garg wrote:
> Hi All.
>
> We have setup a Ubuntu-14.04 server, and followed the steps exactly as
> per http://wiki.apache.org/cassandra/DebianPackaging
>
> Installation completes fine, Cassandra starts fine, h
Sachin,
I hope you are not using Cassandra 2.2 in production?
regards
Neha
On Tue, Sep 1, 2015 at 11:20 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:
> DSE 4.7 ships with Cassandra 2.1 for stability.
>
> All the best,
>
>
> [image: datastax_logo.png]
Hi,
Can you specify which version of Cassandra you are using?
Can you provide the error stack?
regards
Neha
On Tue, Sep 1, 2015 at 2:56 AM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:
> or https://issues.apache.org/jira/browse/CASSANDRA-8611 perhaps
>
up to 2.1.5
>> (in the 2.1.x series) are not considered stable.
>>
>>Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: cjrolo | Linkedin:
>> *linkedin.com/in/car
Any help?
On Thu, Jul 2, 2015 at 6:18 AM, Neha Trivedi wrote:
> also:
> root@cas03:~# sudo service cassandra start
> root@cas03:~# lsof -n | grep java | wc -l
> 5315
> root@cas03:~# lsof -n | grep java | wc -l
> 977317
> root@cas03:~# lsof -n | grep java | wc -l
> 880240
also:
root@cas03:~# sudo service cassandra start
root@cas03:~# lsof -n | grep java | wc -l
5315
root@cas03:~# lsof -n | grep java | wc -l
977317
root@cas03:~# lsof -n | grep java | wc -l
880240
root@cas03:~# lsof -n | grep java | wc -l
882402
On Wed, Jul 1, 2015 at 6:31 PM, Neha Trivedi wrote
One of the column families has an SSTable count as below:
SSTable count: 98506
Could it be because of Cassandra version 2.1.3?
I found this : https://issues.apache.org/jira/browse/CASSANDRA-8964
regards
Neha
On Wed, Jul 1, 2015 at 5:40 PM, Jason Wee wrote:
> nodetool cfstats?
>
> On W
Also you can monitor the number of sstables.
>
> C*heers
>
> Alain
>
> 2015-07-01 11:53 GMT+02:00 Neha Trivedi :
>
>> Thanks I will checkout.
>> I increased the ulimit to 10, but I am getting the same error, but
>> after a while.
>> regards
>>
Thanks, I will check it out.
I increased the ulimit to 10, but I am still getting the same error, just
after a while now.
regards
Neha
On Wed, Jul 1, 2015 at 2:22 PM, Alain RODRIGUEZ wrote:
> Just check the process owner to be sure (top, htop, ps, ...)
>
>
> http://docs.datastax.com/en/c
Arun,
I am logging on to the server as root and running (sudo service cassandra start).
regards
Neha
On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi
wrote:
> Thanks Arun ! I will try and get back !
>
> On Wed, Jul 1, 2015 at 10:32 AM, Arun wrote:
>
>> Looks like you have too man
>
> > On Jun 30, 2015, at 21:16, Neha Trivedi wrote:
> >
> > Hello,
> > I have a 4 node cluster with SimpleSnitch.
> > Cassandra : Cassandra 2.1.3
> >
> > I am trying to add a new node (cassandra 2.1.7) and I get the following
> error.
> >
>
Hello,
I have a 4 node cluster with SimpleSnitch.
Cassandra : Cassandra 2.1.3
I am trying to add a new node (cassandra 2.1.7) and I get the following
error.
ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94 -
JVM state determined to be unstable. Exiting forcefully due to:
test.log (first_name);
CREATE INDEX log_lastname_index ON test.log (last_name);
CREATE INDEX log_dob_index ON test.log (dob);
INSERT INTO log(id, first_name,last_name) VALUES ( 3, {'rob'},{'abbate'});
INSERT INTO log(id, first_name,last_name) VALUES ( 4, {'neha'},{
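A minimal, untested sketch of a schema the fragment above would be consistent
with, assuming the name columns are sets (the {'...'} literals suggest
collections) and dob is text; with an index on a non-frozen set column,
CONTAINS can match a single element:

CREATE TABLE test.log (
    id int PRIMARY KEY,
    first_name set<text>,
    last_name set<text>,
    dob text
);

CREATE INDEX log_firstname_index ON test.log (first_name);

INSERT INTO test.log (id, first_name, last_name) VALUES (3, {'rob'}, {'abbate'});

-- the index on the set column lets CONTAINS match one element
SELECT * FROM test.log WHERE first_name CONTAINS 'rob';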
ct the old
> nodes to forget data for which they are no longer responsible). The data
> responsibility hasn't changed for any node, all nodes are still responsible
> for all data.
>
> On Mon, Apr 27, 2015 at 9:19 PM, Neha Trivedi
> wrote:
>
>> Thanks Arun !
>
Thanks Arun !
On Tue, Apr 28, 2015 at 9:44 AM, arun sirimalla wrote:
> Hi Neha,
>
>
> After you add the node to the cluster, run nodetool cleanup on all nodes.
> Next running repair on each node will replicate the data. Make sure you
> run the repair on one node at a time, be
n a node, then move to the next node.
Anything else we need to take care of?
Thanks
Regards
neha
On Mon, Apr 27, 2015 at 9:45 PM, Eric Stevens wrote:
> It depends on why you're adding a new node. If you're running out of disk
> space or IO capacity in your 2 node cluster, then changing
Hi
We have a 2-node cluster with RF=2. We are planning to add a new node.
Should we change RF to 3 in the schema, or just add the new node and keep
RF=2?
Are there any other best practices we need to take care of?
Thanks
regards
Neha
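If you do decide to go to RF=3 once the third node has joined, a minimal
sketch (keyspace name and replication strategy assumed; the thread mentions
SimpleSnitch):

ALTER KEYSPACE mykeyspace
WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

-- then run nodetool repair on each node, one node at a time, so the new
-- third replica of the existing data gets built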
> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>
> What is the consistency level ?
>
> Best Regards,
> Kiran.M.K.
>
>
> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi
> wrote:
>
>> hi,
>>
>> What is the count of records in th
Thanks Sebastian, I will try it out.
But I am also curious why the COPY command is failing with an Out of Memory
error.
regards
Neha
On Tue, Apr 21, 2015 at 4:35 AM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:
> Blobs are ByteBuffers, so it calls getBytes().toString:
Are the nproc, nofile, and memlock settings in
/etc/security/limits.d/cassandra.conf set to optimum values?
They are all at the defaults.
What is the consistency level?
CL = Quorum
Is there any other way to export a table to CSV?
regards
Neha
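For reference, a minimal cqlsh COPY sketch (keyspace, table, and column names
are made up); listing only the columns you need, and leaving out a large blob
column, may keep the export smaller:

cqlsh> COPY mykeyspace.mytable (id, first_name, last_name) TO '/tmp/mytable.csv' WITH HEADER = true;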
On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk wrote:
hi,
What is the count of records in the column-family ?
We have about 38,000 rows in the column family that we are
trying to export.
What is the Cassandra Version ?
We are using Cassandra 2.0.11
MAX_HEAP_SIZE and HEAP_NEWSIZE are at the defaults.
The server has 8 GB of RAM.
regards
Neha
On
Hello all,
We are getting an OutOfMemoryError on one of the nodes, and the node goes
down when we run the export command to get all the data from a table.
Regards
Neha
ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java (line
199) Exception in thread Thread[ReadStage
Thanks a lot Steve.
I suspected the same. I will definitely read.
regards
Neha
On Fri, Jan 23, 2015 at 11:22 PM, Steve Robenalt
wrote:
> Hi Neha,
>
> As far as I'm aware, 4GB of RAM is a bit underpowered for Cassandra even
> if there are no other processes on the same server
2.03 Driver (This is the one I can change and try)
4. 64.4 GB HDD
5. Attached Memory and CPU information.
Regards
Neha
On Fri, Jan 23, 2015 at 6:50 AM, Steve Robenalt
wrote:
> I agree with Rob. You shouldn't need to change the read timeout.
>
> We had similar issues wit
due to memory?
2. Is it related to the driver?
Initially, when I was trying 15 MB, it was throwing the same exception, but
after that it started working.
thanks
regards
neha
Hi Ajay,
1. You should have at least 2 seed nodes, as it will help when Node1 (the
only seed node) is down.
2. Check that you are using the internal IP address in listen_address and
rpc_address.
On Mon, Jan 5, 2015 at 2:07 PM, Ajay wrote:
> Hi,
>
> I did the Cassandra cluster set up as below:
>
> Nod
Use 2.0.11 for production
On Wed, Dec 31, 2014 at 11:50 PM, Robert Coli wrote:
> On Wed, Dec 31, 2014 at 8:38 AM, Ajay wrote:
>
>> For my research and learning I am using Cassandra 2.1.2. But I see couple
>> of mail threads going on issues in 2.1.2. So what is the stable or popular
>> build for
t streaming throughput on your existing nodes to a lower number like 50
> or 25.
>
> On Tue, Dec 16, 2014 at 11:10 AM, Neha Trivedi
> wrote:
>>
>> Thanks Ryan.
>> So, as Jonathan recommended, we should have RF=3 with Three nodes.
>> So Quorum = 2 so, CL= 2 (or I
, dynamically without affecting my nodes ?
regards
Neha
On Tue, Dec 16, 2014 at 10:32 PM, Ryan Svihla wrote:
>
>
> CL quorum with RF2 is equivalent to ALL, writes will require
> acknowledgement from both nodes, and reads will be from both nodes.
>
> CL one will write to both re
every READ, it will read from both nodes and give the result back to the client?
Will DowngradingConsistencyRetryPolicy downgrade the CL if a node is down?
Regards
Neha
On Wed, Dec 10, 2014 at 1:00 PM, Jonathan Haddad wrote:
>
> I did a presentation on diagnosing performance problems in production at
> the US &
3 because if you lose 1 server, you're on the edge
> of data loss.
>
>
> On Tue Dec 09 2014 at 7:19:32 PM Neha Trivedi
> wrote:
>
>> Hi,
>> We have Two Node Cluster Configuration in production with RF=2.
>>
>> Which means that the data is written
ed or I can manage with nodetool
utility?
3. Is it necessary to run repair weekly?
thanks
regards
Neha
Check if you have rpc_server_type: hsha in cassandra.yaml; change it to sync and try.
Sent from my iPhone
On Dec 4, 2014, at 3:55 PM, sinonim wrote:
Hi all,
We have a Cassandra cluster with nodes on version 2.0.10, all in a
single EC2 region. We want to perform a rolling upgrade to version 2.1.2 but
t
Thanks Jens and Robert !!!
On Wed, Dec 3, 2014 at 2:20 AM, Robert Coli wrote:
> On Mon, Dec 1, 2014 at 7:10 PM, Neha Trivedi
> wrote:
>
>> No the old node is not defective. We Just want to separate out that
>> Server for testing.
>> And add a new node. (Present clu
No, the old node is not defective. We just want to separate out that server
for testing and add a new node. (The present cluster has two nodes and RF=2.)
thanks
On Tue, Dec 2, 2014 at 12:04 AM, Robert Coli wrote:
> On Sun, Nov 30, 2014 at 10:15 PM, Neha Trivedi
> wrote:
>
>> I need
Hi,
I need to add a new node and remove an existing node.
Should I first remove the node and then add the new node, or add the new node
and then remove the existing one?
Which practice is better, and what do I need to take care of?
regards
Neha