Hi,
We’ve got a legacy application on Perl 5.8 and don’t want to upgrade the server
for fear of something breaking. However, for redundancy, it would be great if I
could update the app so it writes to both the existing database and Cassandra.
The Perl drivers appear to require Perl 5.10+
http
Hi,
I am trying to set up a multi-node Cassandra cluster (2 nodes). I am
using apache-cassandra-2.0.4. I am able to start Cassandra on the seed
node. But when I try to start it on the other node, it starts and then
fails within a few seconds. I can see the following in my error log:
"ERROR 03:23:56,915 E
Hi All,
We've faced a very similar effect after an upgrade from 1.1.7 to 2.0 (via
1.2.10). Probably after upgradesstable (but it's only a guess,
because we noticed the problem a few weeks later), some rows became
tombstoned. They just disappear from the results of queries. After
investigation I've noticed that
Hi Sundeep,
Can you please confirm: are you configuring the two nodes in different datacenters?
If you are configuring a single datacenter with two nodes, then please change
the endpoint_snitch from RackInferringSnitch to SimpleSnitch and restart the
cluster.
Regards,
Chiru
On 03-Feb-2014, at
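For readers following the thread: the snitch is set in conf/cassandra.yaml and
must be the same on every node. RackInferringSnitch derives datacenter and rack
from the second and third octets of each node's IP address, which can scatter a
small cluster across unintended datacenters; SimpleSnitch places all nodes in a
single datacenter. A minimal sketch of the change Chiru is suggesting:

    # conf/cassandra.yaml, identical on both nodes; restart each node afterwards
    endpoint_snitch: SimpleSnitch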
Same problem here, but in my case the same configuration does start up when I
just retry it a second time.
Some more info on my specific configuration:
Environment: Linux, Cassandra 2.0.4, 3 nodes, embedded, byte-ordered, LCS
When I add a node to the existing 3-node cluster I sometimes get the
Thank you Chiru for the reply. I am configuring a single datacenter. I
changed it to SimpleSnitch. However, I am getting the same error.
-Sundeep
On Mon, Feb 3, 2014 at 3:58 AM, Chiranjeevi Ravilla wrote:
> Hi Sundeep,
>
> Can you please confirm: are you configuring the two nodes in different
> Data
Hello Sundeep
It seems that in both configs of your nodes you are using the same hostname
as the seeds value.
You have to enumerate all nodes in your cluster.
Best wishes,
Ilya
On Feb 3, 2014 10:47 AM, "Sundeep Kambhampati"
wrote:
> Hi,
>
> I am trying to set up a multi-node Cassandra cluster (2 nodes
Hi Ilya,
On 03/02/14 10:49, Ilya Sviridov wrote:
Hello Sundeep
It seems that in both configs of your nodes you are using the same hostname as
the seeds value.
You have to enumerate all nodes in your cluster.
not so! If all nodes N1, N2, ... use the same node N0 as a seed, then by
gossiping with
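To make the correction concrete, here is a seed layout that is valid under that
reading for a two-node 2.0.x cluster (addresses are made up): both nodes can
carry an identical seeds entry naming a single reachable node, and the non-seed
node joins the ring by gossiping with it.

    # conf/cassandra.yaml, identical on both nodes (example addresses)
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # node 1's address; node 2 contacts it on startup
              - seeds: "10.0.0.1"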
Hi,
I have a 4-node Cassandra cluster with one node marked as the seed node. When I
checked the data directory of the seed node, it has the two folders
/keyspace/columnfamily.
But the SSTable db files are not available; the folder is empty. The db files
are available on the remaining nodes.
I want to know th
I'm guessing it's just a coincidence. As far as I know, seeds have nothing
to do with where the data should be located.
I think there could be a couple of reasons why you wouldn't see SSTables in a
specific column family folder; these are some of them:
- You're using a few distinct keys which none of t
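Two quick checks along those lines (keyspace and column family names are
hypothetical): SSTables only appear on disk after a memtable flush, so recently
written data may still be memory-resident, and a node owning little of the ring
may legitimately hold none of the keys.

    # rule out "still in the memtable": force a flush, then look again
    nodetool flush mykeyspace mycolumnfamily
    ls /var/lib/cassandra/data/mykeyspace/mycolumnfamily/

    # see how the ring is distributed across nodes
    nodetool status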
Hi Paul,
I think some of the dependencies, like Snappy, also require Perl 5.10. If it's
not too difficult to get it to work on Perl 5.8, it would be very much
appreciated. If it's a lot of work, I wouldn't want to waste your time; after
all, who still depends on Perl 5.8? (:
(I can always resort to
Dear Cassandra list,
We are experiencing the following distribution of our tokens:

State  Address         Load       Tokens  Owns   Host ID  Rack
UN     10.180.199.188  108.72 GB  263     0.0%   f2
The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.2.14.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
http://cassan
Hi all,
I am running C* 1.2.13 with vnodes at around 1 TB/node. I just
noticed that one of my larger LCS CFs (300-400 GB/node) is showing a
droppable tombstone ratio of between 23 and 28% on my nodes. I did not
indicate a value to be used in my table creation, so I assume it's using the
def
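For reference, the default being asked about is the compaction subproperty
tombstone_threshold, which is 0.2 (20%), so a 23-28% droppable-tombstone ratio
is already above it; single-SSTable tombstone compactions can still be held
back when the candidate SSTable overlaps others. Making the setting explicit
looks like this (keyspace and table names hypothetical):

    cqlsh -e "ALTER TABLE myks.mytable
              WITH compaction = {'class': 'LeveledCompactionStrategy',
                                 'tombstone_threshold': '0.2'};"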
I was able to send SafePointStatistics to another log file via the
additional JVM flags and recently noticed a pause of 9.3936600 seconds.
Here are the log entries:
GC Log file:
---
2014-01-31T07:49:14.755-0500: 137460.842: Total time for which application
threads were stopped: 0.1
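For anyone wanting to reproduce this, the thread doesn't show the exact flag
set, but HotSpot can print per-safepoint statistics and redirect its VM output
to a separate file with options along these lines (a sketch; flags assumed):

    # conf/cassandra-env.sh (sketch; exact flag set assumed)
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1"
    JVM_OPTS="$JVM_OPTS -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput"
    JVM_OPTS="$JVM_OPTS -XX:LogFile=/var/log/cassandra/vm.log"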
if you are using < 2.0.4, then you are hitting
https://issues.apache.org/jira/browse/CASSANDRA-6527
On Mon, Feb 3, 2014 at 2:51 AM, olek.stas...@gmail.com
wrote:
> Hi All,
> We've faced a very similar effect after an upgrade from 1.1.7 to 2.0 (via
> 1.2.10). Probably after upgradesstable (but it's o
Hi Frank,
The "9391" under RevokeBias is the number of milliseconds spent
synchronising on the safepoint prior to the VM operation, i.e. the time it
took to ensure all application threads were stopped. So this is the
culprit. Notice that the time spent spinning/blocking for the threads we
are supp
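If RevokeBias operations keep showing up as the expensive safepoints, one
commonly tried experiment (not confirmed as the fix in this thread) is to
disable biased locking, which removes the need for those revocation safepoints:

    # conf/cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -XX:-UseBiasedLocking"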
Ok, but will the upgrade "resurrect" my data? Or should I perform some
additional action to bring my system to a correct state?
best regards
Aleksander
On Feb 3, 2014 at 17:08, "Yuki Morishita" wrote:
> if you are using < 2.0.4, then you are hitting
> https://issues.apache.org/jira/browse/CASSANDRA-6527
Does Cassandra put keys in the key cache during the write path?
If I have two tables, the key cache for the first table was warmed up
nicely, and I want to insert millions of rows into the second table with
no reads on the second table yet: will that affect the cache hit ratio for
the first table?
Tha
Folks,
Has anyone out there used Cassandra 2.0 with Hadoop 2.x? I saw this
discussion on the Cassandra JIRA:
https://issues.apache.org/jira/browse/CASSANDRA-5201
but the fix referenced
(https://github.com/michaelsembwever/cassandra-hadoop) is for
Cassandra 1.2.
I put together a similar pat
On Mon, Feb 3, 2014 at 10:03 AM, Daning Wang wrote:
> Does Cassandra put keys in key cache during the write path?
>
No.
=Rob
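This is also easy to observe empirically: the key cache has been global since
1.1, and nodetool reports its hits and requests, so the hit rate can be sampled
before and after the bulk insert.

    # global key cache stats: size, capacity, hits, requests, recent hit rate
    nodetool info | grep -i "key cache"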
On Mon, Feb 3, 2014 at 12:51 AM, olek.stas...@gmail.com <
olek.stas...@gmail.com> wrote:
> We've faced a very similar effect after an upgrade from 1.1.7 to 2.0 (via
> 1.2.10). Probably after upgradesstable (but it's only a guess,
> because we noticed the problem a few weeks later), some rows became
> tombst
Yes, I haven't run sstableloader. The data loss appeared somewhere along the line:
1.1.7 -> 1.2.10 -> upgradesstable -> 2.0.2 -> normal operations -> 2.0.3 ->
normal operations -> now
Today I've noticed that the oldest files with broken values appeared during
repair (we do repair once a week on each node). Maybe
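One way to confirm what is actually inside the suspect files (paths and names
hypothetical): sstable2json, shipped with Cassandra, dumps rows together with
their deletion info, so a row that "disappeared" can be checked for an
unexpected tombstone.

    # deleted rows/columns show up with deletion markers in the JSON dump
    sstable2json /var/lib/cassandra/data/myks/mycf/myks-mycf-jb-123-Data.db | less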
On Mon, Feb 3, 2014 at 1:02 PM, olek.stas...@gmail.com <
olek.stas...@gmail.com> wrote:
> Today I've noticed that the oldest files with broken values appeared during
> repair (we do repair once a week on each node). Maybe it's the repair
> operation which caused the data loss?
Yes, unless you added or re
2014-02-03 Robert Coli :
> On Mon, Feb 3, 2014 at 1:02 PM, olek.stas...@gmail.com
> wrote:
>>
>> Today I've noticed that the oldest files with broken values appeared during
>> repair (we do repair once a week on each node). Maybe it's the repair
>> operation which caused the data loss?
>
>
> Yes, unless yo
On Mon, Feb 3, 2014 at 2:17 PM, olek.stas...@gmail.com <
olek.stas...@gmail.com> wrote:
> No, I've done repair after upgrading sstables. In fact it was about 4
> weeks after, because of a bug:
>
If you only did a repair after you upgraded SSTables, when did you have an
opportunity to hit:
https://i
On Mon, Feb 3, 2014 at 8:52 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:
>
> It's possible that this is a JVM issue, but if so there may be some
> remedial action we can take anyway. There are some more flags we should
> add, but we can discuss that once you open a ticket. If yo
Experiencing socket timeout errors on most of the nodes in one DC of a multi-DC
cluster. Here is the error. The client is having intermittent high response
time issues in this DC. DC1 does not experience any timeout issues, but DC2
does. This error started occurring recently and repeats cons
On Sun, Feb 2, 2014 at 10:48 AM, Sholes, Joshua <
joshua_sho...@cable.comcast.com> wrote:
> I had a node in my 8-node production 1.2.8 cluster develop a serious
> problem and need to be removed and rebuilt. However, after doing nodetool
> removenode and then bootstrapping a new node on the same IP