Hi Ken,
the system_auth keyspace should be repaired. However, the system keyspace
uses a local replication strategy, so there is no point in repairing it.
Thanks,
Prem
On Tue, Aug 11, 2015 at 3:01 PM, K F wrote:
> Hi,
>
> I have a question in general with regards to repairs on system related
> k
1) There are ways to connect two VPCs using VPN.
2) About connectivity using public IPs: can you ping one public IP
from another one in a different region?
If ping works, please check port connectivity using telnet. You can start a
temp server on a port using netcat. If connectivity fails, y
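For example, the port check can also be scripted. A minimal sketch in Scala
(the host, port, and 5-second timeout below are placeholders, not values from
this thread):

import java.net.{InetSocketAddress, Socket}

// Placeholder peer: replace with the other region's public IP and the port
// you want to test (e.g. 9160 for Thrift, 7000 for inter-node traffic).
val host = "203.0.113.10"
val port = 9160

val socket = new Socket()
try {
  socket.connect(new InetSocketAddress(host, port), 5000) // 5s timeout
  println(s"$host:$port is reachable")
} catch {
  case e: Exception => println(s"$host:$port is NOT reachable: " + e.getMessage)
} finally {
  socket.close()
}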
We had this issue when using Hive on Cassandra.
We had to replace the Thrift jar with one containing our own patches.
On Fri, Aug 14, 2015 at 5:27 PM, K F wrote:
> While using sstableloader in 2.0.14 we have discovered that setting
> the thrift_framed_transport_size_in_mb to 16 in cassandra.yaml doesn't
> hono
The EC2 nodes must be in the default VPC.
Create a ring in the VPC in region B. Use VPC peering to connect the
default VPC and the region B VPC.
The two rings should join the existing one. Alter the replication strategy
to NetworkTopologyStrategy so that the data is replicated to the new rings.
Repair the
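As a sketch, altering the keyspace from the DataStax Java driver (the contact
point, keyspace name, DC names, and replica counts are placeholders):

import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val session = cluster.connect()
// Replicate to both DCs; adjust keyspace/DC names and counts to your setup.
val session = cluster.connect()
session.execute(
  "ALTER KEYSPACE my_ks WITH replication = " +
  "{'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3}")
cluster.close()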
? Because
> I would be very surprised if the default VPC must be used.
>
> On Sat, Aug 15, 2015 at 2:50 AM, Prem Yadav wrote:
>
>>
>> The EC2 nodes must be in the default VPC.
>>
>> create a ring in the VPC in region B. Use VPC peering to connect the
>> default and
MySQL is there just to save the state of things. I suppose it's very
lightweight. Why not just install MySQL on one of the nodes or on a VM
somewhere?
On Sun, Aug 16, 2015 at 3:39 PM, John Wong wrote:
> Sorry, I meant integration with Cassandra (based on the docs, by default it
> suggests MySQL)
>
Hi,
Is it better to use the Spark API to do joins on Cassandra tables, or should
we use Spark SQL?
We have been struggling with Spark SQL, as we need to do multiple large
table joins and they always fail.
I tried to do joins using the API like this:
val join1 =
sc.cassandraTable("Keyspace1","tabl
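For comparison, the rough shape of an RDD-level join with the
spark-cassandra-connector (the keyspace, tables, and host below are made up,
and this assumes table2's partition key columns exist under the same names in
table1):

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

val conf = new SparkConf()
  .setAppName("cassandra-join")
  .set("spark.cassandra.connection.host", "10.0.0.1") // placeholder
val sc = new SparkContext(conf)

// Read the left table, then look up matching rows in table2 by its
// partition key instead of shuffling both tables.
val left = sc.cassandraTable("keyspace1", "table1")
val joined = left.joinWithCassandraTable("keyspace1", "table2")
joined.take(10).foreach(println)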
If it is Cassandra 2.0+,
you can implement your own trigger. Please check the following link:
http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-0-prototype-triggers-support
Thanks,
Prem
On Sun, Nov 22, 2015 at 4:48 PM, Harikrishnan A wrote:
> Trying for second time to get some insights to
Just letting the community know that I just passed the Cassandra architect
certification with flying colors :).
Have to say I learnt a lot from this forum.
Thanks,
Prem
Can you run the trace again for the query "select * " without any
conditions and see if you are getting results for tnt_id=5?
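If cqlsh tracing is awkward, the trace can also be captured from the driver; a
minimal sketch with the DataStax Java driver (host and table names are
placeholders):

import com.datastax.driver.core.{Cluster, SimpleStatement}

val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val session = cluster.connect()

val stmt = new SimpleStatement("SELECT * FROM my_ks.my_table").enableTracing()
val rs = session.execute(stmt)
rs.all() // drain the result set so the trace covers the whole query
val trace = rs.getExecutionInfo.getQueryTrace
println("trace " + trace.getTraceId + ", " + trace.getDurationMicros + "us")
cluster.close()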
On Mon, Nov 23, 2015 at 1:23 PM, Ramon Rockx wrote:
> Hello Oded and Carlos,
>
> Many thanks for your tips. I modified the consistency level in cqlsh, but
> with no suc
is there and looks fine, probably there's a problem managing
> varints somewhere in the read path.
>
> Regards
>
>
> Carlos Alonso | Software Engineer | @calonso <https://twitter.com/calonso>
>
> On 23 November 2015 at 13:55, Ramon Rockx wrote:
>
>> He
Hi,
I am trying to understand different use cases related to using UUID as the
partition key. I am sure I am missing something trivial and will be
grateful if you can help me understand this.
When do you use the UUID as the primary key? What can be a use case?
Since it is unique, how do you quer
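To make the question concrete, this is the kind of usage I mean (the schema
and all names are made up):

import java.util.UUID
import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val session = cluster.connect()

// A table keyed by a random UUID.
session.execute(
  "CREATE TABLE IF NOT EXISTS my_ks.users (" +
  "user_id uuid PRIMARY KEY, name text, email text)")

// The application generates the key at insert time...
val userId = UUID.randomUUID()
session.execute(
  "INSERT INTO my_ks.users (user_id, name, email) VALUES (?, ?, ?)",
  userId, "alice", "alice@example.com")

// ...so any later read needs that exact UUID back from somewhere.
val row = session.execute(
  "SELECT name FROM my_ks.users WHERE user_id = ?", userId).one()
println(row.getString("name"))
cluster.close()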
).
>
> Thanks,
> Jay
>
> On Mon, Nov 23, 2015 at 11:08 AM, Prem Yadav wrote:
>
>> Hi,
>>
>> I am trying to understand different use cases related to using UUID as
>> the partition key. I am sure I am missing something trivial and will be
>> grateful
userid(uuid).
> Please refer to www.killrvideo.com website. It is a great place to
> understand how a web application is built on Cassandra.
>
> Thanks,
> Jay
>
> On Mon, Nov 23, 2015 at 11:18 AM, Prem Yadav wrote:
>
>> Thanks Jay. Now this is great while creating the use
*If your cluster does not use vnodes*
Are you using vnodes now?
On Mon, Nov 23, 2015 at 10:55 PM, Robert Wille wrote:
> I’m wanting to upgrade from 2.0 to 2.1. The upgrade instructions at
> http://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgradeCassandraDetails.html
> has
> the follo
Compaction is done to improve reads. The compaction process is very CPU
intensive and can slow writes down, since writes are also CPU-bound.
On Wed, Nov 25, 2015 at 11:12 AM, wrote:
> Hi all,
>
>
>
> Does compaction throughput impact write performance ?
>
>
>
> Increasing the value of
Dan,
As part of the upgrade, did you upgrade the sstables?
Sent from mobile. Please excuse typos
On 28 Sep 2017 17:45, "Dan Kinder" wrote:
> I should also note, I also see nodes become locked up without seeing that
> Exception. But the GossipStage buildup does seem correlated with gossip
> activity
Hi,
this is an issue that has happened a few times. We are using DSE 4.0.
One of the Cassandra nodes is detected as dead by OpsCenter even though
I can see the process is up.
The logs show a heap space error:
INFO [RMI TCP Connection(18270)-172.31.49.189] 2014-09-24 08:31:05,340
StorageService.
Well, it's not the Linux OOM killer. The system is running with all default
settings.
Total memory is 7 GB; Cassandra gets assigned 2 GB.
2-core processors.
Two rings with 3 nodes in each ring.
On Wed, Sep 24, 2014 at 9:53 PM, Michael Shuler
wrote:
> On 09/24/2014 11:32 AM, Prem Yadav wrote:
>
&
BTW, thanks Michael.
I am surprised I didn't search for Cassandra OOM before.
I got some good links that discuss that. Will try to optimize and see how
it goes.
On Wed, Sep 24, 2014 at 10:27 PM, Prem Yadav wrote:
> Well its not the Linux OOM killer. The system is running with all
Hi,
this is an issue we have faced a couple of times now.
Every once in a while OpsCenter throws an error that the repair service
failed due to errors. In the logs we can see multiple lines like:
Repair task (,
(-6964720218971987043L, -6963882488374905088L), set([tables])) timed out
after 3600 second
Increase the read CL to quorum and you should get correct results.
How many nodes do you have in the cluster and what is the replication
factor for the keyspace?
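For example, with the DataStax Java driver it would look roughly like this
(the contact point and guid value are placeholders; the table is the one from
your mail):

import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}
import scala.collection.JavaConverters._

val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val session = cluster.connect()

// Read at QUORUM so the read set overlaps the replicas that acked the write.
val stmt = new SimpleStatement(
  "SELECT * FROM my_ks.tomb_test WHERE guid = ?", "some-guid")
  .setConsistencyLevel(ConsistencyLevel.QUORUM)
session.execute(stmt).all().asScala.foreach(println)
cluster.close()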
On Mon, Mar 30, 2015 at 7:41 PM, Benyi Wang wrote:
> Create table tomb_test (
>guid text,
>content text,
>range text,
>
Look into Sqoop. I believe using Sqoop you can transfer data between C*
clusters, though I haven't tested it.
The other option is to write a program that reads from one cluster and writes
the required data to another.
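A rough sketch of the second option (contact points, keyspace, and columns
are all made up; no paging tuning or error handling):

import com.datastax.driver.core.Cluster
import scala.collection.JavaConverters._

val srcCluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val dstCluster = Cluster.builder().addContactPoint("10.1.0.1").build()
val src = srcCluster.connect()
val dst = dstCluster.connect()

// Stream rows out of one cluster and re-insert them into the other.
for (row <- src.execute("SELECT id, payload FROM my_ks.my_table").asScala) {
  dst.execute(
    "INSERT INTO my_ks.my_table (id, payload) VALUES (?, ?)",
    row.getUUID("id"), row.getString("payload"))
}
srcCluster.close()
dstCluster.close()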
On Tue, Apr 14, 2015 at 12:27 PM, skrynnikov_m wrote:
> Hello!!!
> Need to migrate dat
Hi,
We have an existing cluster consisting of 3 DCs. Authentication is enabled.
I am trying to add a new DC. I followed the steps mentioned at:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_add_dc_to_cluster_t.html
But I still can't log in to any of the nodes in the new DC us
> --
> *From*:"Prem Yadav"
> *Date*:Sun, 7 Jun, 2015 at 8:19 pm
> *Subject*:Add new DC to cluster
>
> Hi,
> We have an existing cluster consisting of 3 DCs. A
Hi,
I have been spending some time looking into whether large files (>100 MB) can
be stored in Cassandra. As per the Cassandra FAQ:
*"Currently Cassandra isn't optimized specifically for large file or BLOB
storage. However, files of around 64Mb and smaller can be easily stored in
the database without s
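The usual workaround I have seen is to chunk the file well below that limit;
a sketch (the schema, 1 MB chunk size, and file path are my own choices, and
the whole file is read into memory for brevity):

import java.nio.ByteBuffer
import java.nio.file.{Files, Paths}
import java.util.UUID
import com.datastax.driver.core.Cluster

val cluster = Cluster.builder().addContactPoint("10.0.0.1").build()
val session = cluster.connect()

// One row per chunk, clustered by chunk number under the file's id.
session.execute(
  "CREATE TABLE IF NOT EXISTS my_ks.file_chunks (" +
  "file_id uuid, chunk_no int, data blob, " +
  "PRIMARY KEY (file_id, chunk_no))")

val chunkSize = 1 << 20 // 1 MB per chunk, far under the FAQ's 64 MB guidance
val bytes = Files.readAllBytes(Paths.get("/tmp/big.bin"))
val fileId = UUID.randomUUID()
for ((chunk, i) <- bytes.grouped(chunkSize).zipWithIndex) {
  session.execute(
    "INSERT INTO my_ks.file_chunks (file_id, chunk_no, data) VALUES (?, ?, ?)",
    fileId, Int.box(i), ByteBuffer.wrap(chunk))
}
cluster.close()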
>
>
>
>
> From: prem yadav
> Reply-To:
> Date: Tuesday, March 18, 2014 at 1:41 PM
>
Hi,
I have a 3-node Cassandra test cluster. The nodes have 4 GB total memory and
2 cores. Cassandra runs with all default settings.
But the Cassandra process keeps getting killed due to OOM. The Cassandra
version in use is 1.1.9.
Here are the settings in use:
compaction_throughput_mb_per_sec: 16
row_cache_
It's Oracle JDK 1.6.
Robert, any fix that you know of which went into 1.2.15 for this particular
issue?
On Sat, Mar 22, 2014 at 4:50 PM, Robert Coli wrote:
> On Sat, Mar 22, 2014 at 7:48 AM, prem yadav wrote:
>
>> But, the cassandra process keeps getting killed due to OOM. Cassandr
Michael, no memory constraints. System memory is 4 GB and Cassandra runs on
defaults.
On Sat, Mar 22, 2014 at 5:32 PM, prem yadav wrote:
> Its Oracle jdk 1.6.
> Robert, any fix that you know of which went into 1.2.15 for this
> particular issue?
>
>
> On Sat, Mar 22, 2014 at 4:
rading to Cassandra 2, jdk 1.7, and default parameters fixed it.
>
> I think the jdk change was the key for my similarly small memory cluster.
>
> ml
>
>
>
> On Sat, Mar 22, 2014 at 1:36 PM, prem yadav wrote:
>
>> Michael, no memory constraints. System memory is 4 GB and Ca
The output of ps waux. Also, there is no load on the cluster. None.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 19224 1076 ? Ss Mar19 0:01 /sbin/init
root 2 0.0 0.0 0 0 ? S Mar19 0:00 [kthreadd]
root
Also, we use DataStax. Version cassandra-1.1.9 doesn't work with Java 7.
On Sat, Mar 22, 2014 at 9:09 PM, prem yadav wrote:
> The output of ps waux . Also there is no load on cluster. None
>
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> ro
The nodes die *without* being under any load. Completely idle.
And 4 GB system memory is not low. Or is it?
I have tried tweaking overcommit memory: tried disabling it,
under-committing, and over-committing.
I also reduced the rpc threads min and max. Will try other settings from that
link, Michael.
Thanks Robert. That seems to be the issue. However, the fix mentioned there
doesn't work. I downgraded Java to jdk6_37 and that seems to have done the
trick. Thanks for pointing me to that Jira ticket.
On Mon, Mar 24, 2014 at 6:48 PM, Robert Coli wrote:
> On Mon, Mar 24, 2014 at 4:11
Hi,
in another thread, I had mentioned that we had an issue with Cassandra getting
killed by the kernel due to OOM. Downgrading to jdk6_37 seems to have fixed it.
However, even now, every couple of hours the nodes show a
spike in memory usage.
For example: on an 8 GB RAM machine, once the usage re
Could you paste output of:
> $ ps -p `jps | awk '/CassandraDaemon/ {print $1}'` uww
> please?
>
>
> On Wed, Mar 26, 2014 at 5:20 PM, prem yadav wrote:
>
>> Hi,
>> in another thread, I had mentioned that we had an issue with Cassandra
>> getting killed
; http://www.datastax.com/documentation/cassandra/2.0/mobile/cassandra/install/installJnaRHEL.html?
>
>
>
> Don
>
>
>
> *From:* prem yadav [mailto:ipremya...@gmail.com]
> *Sent:* Wednesday, March 26, 2014 10:36 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: memory
; On Wed, Mar 26, 2014 at 8:35 AM, prem yadav wrote:
>
>> Thanks Robert. That seems to be the issue. however the fix mentioned
>> there doesn't work. I downgraded Java to jdk6_37 and that seems to have
>> done the trick. Thanks for pointing me to that Jira ticket.
>&
I have noticed that too. But even though DSE installs OpenJDK, it never
gets used. So you should be OK.
On Thu, Mar 27, 2014 at 8:29 PM, Jon Forrest wrote:
> I'm using Oracle Java 7 on a CentOS 6.5 system.
> Running 'java -version' works correctly and shows
> I'm running Oracle Java. I don't wa
Though Cassandra can work, it looks to me like you could use a
persistent queue, for example RabbitMQ, to implement this. All your workers
can subscribe to a queue.
In fact, why not just use MySQL?
On Thu, Apr 3, 2014 at 11:44 PM, Jan Algermissen wrote:
> Hi,
>
> maybe someone knows a nice solut
Oh, OK. I thought you did not already have a Cassandra cluster. Sorry about
that.
On Fri, Apr 4, 2014 at 11:42 AM, Jan Algermissen wrote:
>
> On 04 Apr 2014, at 11:18, prem yadav wrote:
>
> Though cassandra can work but to me it looks like you could use a
> persistent qu
You can specify multiple data directories in cassandra.yaml.
Ex:
data_file_directories:
    - /var/lib/cass1
    - /var/lib/cass2
    -/
On Mon, Apr 7, 2014 at 12:10 PM, Jan Kesten wrote:
> Hi Hari,
>
> C* will use your entire space - that is something one should monitor.
> Depending on your choose
Hi,
I am new to Cassandra, and even though I am not familiar with the
implementation and architecture of Cassandra, I struggle with how best to
design the schema.
We have an application where we need to store huge amounts of data. It's a
per-user storage where we store a lot of data for each user and
a/
>
> Cassandra uses one thread-per-client for remote procedure calls. For a
> large number of client connections, this can cause excessive memory usage
> for the thread stack. Connection pooling on the client side is highly
> recommended.
>
> --
> Thanks,
> Sergey
&
Are these virtual machines? The last time I had this issue it was because of
VMware "ballooning".
If not, what versions of Cassandra and Java are you using?
On Mon, Apr 28, 2014 at 6:30 PM, Gary Zhao wrote:
> BTW, the CPU usage on this node is pretty high, but data size is pretty
> small.
>
> PID
Hi Jabbar,
with vnodes, scaling up should not be a problem. You could just add
machines with the cluster/seed/datacenter config and they should join the
cluster.
Scaling down has to be manual: you drain the node and then decommission it.
thanks,
Prem
On Wed, May 21, 2014 at 12:35 PM, Jabbar Azam
I would think it's because of the index and filter files, plus the
additional data that gets added because of serialization. Also, since
SSTables are only deleted after the compaction is finished, it might be
possible that when you checked, the intermediate SSTables were not yet
deleted.
However,
Hi,
in the last week, we saw at least two emails about dead node
replacement. Though I saw the documentation about how to do this, I am not
sure I understand why it is required.
Assuming the replication factor is >2, if a node dies, why does it matter? If
a new node is added, shouldn't
Hi,
I have seen it in a lot of replies that Cassandra is not designed for
this and that. I don't want to sound rude, I just need some info about this
so that I can compare it to technologies like HBase, Mongo, Elasticsearch,
Solr, etc.
1) What is Cassandra designed for? Heavy writes, yes. So is H
t opinion
>
>
>
> http://khangaonkar.blogspot.com/2014/06/apache-cassandra-things-to-consider.html
>
> regards
>
>
>
> On Fri, Jul 4, 2014 at 7:37 AM, Prem Yadav wrote:
>
>> Hi,
>> I have seen this in a lot of replies that Cassandra is not designed for
>
a.
>
> So if you’re looking for a high-scale out, high-throughput transactional
> system then Cassandra may make sense for you. If you’re looking for
> something more geared towards analytics (so few bulk writes, many reads),
> then something in the Hadoop space may make sense.
>
> C
use bleeding edge technologies. They'd
> better off using a classical RDBMS solution that fit perfectly their load
>
> Hope that helps
>
> Duy Hai DOAN
>
>
>
> On Fri, Jul 4, 2014 at 9:31 PM, Prem Yadav wrote:
>
>> Thanks Manoj. Great post for those who alread
Please post the full exception.
On Fri, Jul 11, 2014 at 1:50 PM, Ruchir Jha wrote:
> We have a 12 node cluster and we are consistently seeing this exception
> being thrown during peak write traffic. We have a replication factor of 3
> and a write consistency level of QUORUM. Also note there is
Hi,
are there any cluster-specific prerequisites for running Spark on Cassandra?
I created two DCs, DC1 and DC2. DC1 had two Cassandra nodes with vnodes.
I created two nodes in DC2 with Murmur3 partitioning and set num_tokens: 1.
Enabled Hadoop and Spark and started DSE.
I can verify that hadoop start