jsvc is not very flexible. Check out the Java Service Wrapper; we swear by it.
http://wrapper.tanukisoftware.com/doc/english/download.jsp
On Jun 17, 2011, at 2:52 AM, Ken Brumer wrote:
>
> Anton Belyaev gmail.com> writes:
>
>>
>> I guess it is not trivial to modify the package to make it use J
All:
Are there any exceptions to that last sentence? Would Cassandra crash when
there is not enough memory? There are other applications running alongside
Cassandra, and those applications may use a lot of memory.
From: Donna Li
Sent: June 17, 2011 9:58
To: user@ca
The short answer to the problem you saw is to monitor the disk space. Also monitor
client-side logs for errors. Running out of commit log space does not stop the
node from doing reads, so it can still be considered up.
One node's view of its own UP'ness is not as important as the other nodes' (or
What do you mean by crash?
If there was some sort of error in Cassandra (including Java running out of
heap space) it will appear in the logs. Are there any error messages in the log?
If there was some sort of JVM error it will be output to std error and
probably end up on std out / conso
I have a query:
I have my Cassandra server running on my local machine and it has loaded
Cassandra specific settings from
apache-cassandra-0.8.0-src/apache-cassandra-0.8.0-src/conf/cassandra.yaml
Now, if I am writing a Java program to connect to this server, why do I need to
provide a new Cassan
What type of environment? We had issues with our cluster on 0.7.6-2 ... The
messages you saw and highlighted, from what I recall, aren't bad ... they are
good. Investigating our crash, it turned out that the OS had killed our
Cassandra process, and this was found in /var/log/messages
Since then, I have
Hi Vivek,
When I write client code in Java, using Hector, I don't specify a
cassandra.yaml ... I specify the host(s) and keyspace I want to
connect to. Alternately, I specify the host(s) and create the
keyspace if the one I would like to use doesn't exist (new cluster for
example). At no point d
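For reference, a minimal sketch of that kind of Hector connection (hedged: the cluster name, host, and keyspace below are illustrative, and this assumes Hector's HFactory API):

import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

public class HectorConnect {
    public static void main(String[] args) {
        // Point Hector at the host(s); no cassandra.yaml is needed on the client side
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        // Bind to an existing keyspace by name
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
    }
}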
Hi Sasha,
This is what I am trying. I sense this is happening with the JDBC driver stuff.
public static void main(String[] args) {
    try {
        java.sql.Connection con = null;
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    }
}
sounds like
https://issues.apache.org/jira/browse/CASSANDRA-2694
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 17 Jun 2011, at 20:10, Vivek Mishra wrote:
> Hi Sasha,
> This is what I am trying . I can sense this is happening wi
Thanks Aaron. But I tried it only with the 0.8.0 release!
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Friday, June 17, 2011 1:55 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra.yaml
sounds like
https://issues.apache.org/jira/browse/CASSANDRA-2694
Cheers
-
Aaron
Hi all,
Anyone experiencing this..?
I noticed one of my 0.7.6-2 nodes had inexplicable, consistently high CPU
usage. Checking the log, I found that there was some kind of SSTable
corruption that was stopping a bunch of files from compacting (first trace
copied below).
I then tried scrub (befor
From: Vivek Mishra
Sent: Friday, June 17, 2011 3:25 PM
To: user@cassandra.apache.org
Subject: getFieldValue()
Hi,
I was looking into the getFieldValue method of the SuperColumn and
SlicePredicate APIs.
It looks slightly confusing to me, as the underlying APIs return Object and the
developer can only be
Since using cassandra 0.8, I see the following warning:
WARN 12:05:59,807 MemoryMeter uninitialized (jamm not specified as java agent);
assuming liveRatio of 10.0. Usually this means cassandra-env.sh disabled jamm
because you are using a buggy JRE; upgrade to the Sun JRE instead
I'm using Sun
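For what it's worth, on a Sun JRE the agent is normally enabled via a line like the following in cassandra-env.sh (hedged: the jamm jar version is illustrative; use whichever jamm jar ships in your lib/ directory):

JVM_OPTS="$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.2.jar"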
My commit logs sometimes eat too much disk space. I see that the oldest is
about a day old, so it's clearly pruning already, but is there some way I can
clear them out manually without breaking stuff, assuming that all the
transactions they describe have been completed?
Marcus
> My commit logs sometimes eat too much disk space. I see that the oldest is
> about a day old, so it's clearly pruning already, but is there some way I can
> clear them out manually without breaking stuff, assuming that all the
> transactions they describe have been completed?
Don't manually r
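A commonly suggested safe route (hedged; this is general practice, not the truncated reply above) is to flush the memtables, which lets Cassandra recycle commit log segments itself once their data is persisted in SSTables:

nodetool -h localhost flush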
Scrub apparently dies because it cannot acquire a file descriptor. Scrub does
not correctly close its files
(https://issues.apache.org/jira/browse/CASSANDRA-2669),
so that may be part of why that happens. However, a simple fix is probably to
raise the file descriptor limit.
--
Sylvain
On Fri, Jun
Correct. But that will not solve the issue of data colocation (data locality)?
From: Sasha Dolgy [mailto:sdo...@gmail.com]
Sent: Thursday, June 16, 2011 8:47 PM
To: user@cassandra.apache.org
Subject: Re: Querying superColumn
Have 1 row with employee info for country/office/division, each column an
Write two records ...
1. [department1] = { "Vivek" : "India" }
2. [India] = { "Vivek" : "department1" }
1. [department1] = { "Vivs" : "USA" }
2. [USA] = { "Vivs" : "department1" }
Now you can query a single row to display all employees in USA or all
employees in department1 ... employee move
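A sketch of those two writes using Hector (hedged: the column family name "Employees" and the helper method are illustrative):

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class EmployeeIndex {
    // Writes the two denormalized rows in one batch
    public static void indexEmployee(Keyspace ks, String dept, String country, String name) {
        Mutator<String> mutator = HFactory.createMutator(ks, StringSerializer.get());
        // Row keyed by department: column name = employee, column value = country
        mutator.addInsertion(dept, "Employees", HFactory.createStringColumn(name, country));
        // Row keyed by country: column name = employee, column value = department
        mutator.addInsertion(country, "Employees", HFactory.createStringColumn(name, dept));
        mutator.execute();
    }
}

A single slice query on the "India" row or the "department1" row then returns all matching employees.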
As far as scrub goes, that could be it. I'm already running unlimited file
handles though, so ulimit is not the answer, unfortunately.
Dominic
On 17 June 2011 12:12, Sylvain Lebresne wrote:
> Scrub apparently dies because it cannot acquire a file descriptor. Scrub
> does
> not correctly closes files
> (ht
On Fri, Jun 17, 2011 at 1:51 PM, Dominic Williams
wrote:
> As far as scrub goes, that could be it. I'm already running unlimited file
> handles though, so ulimit is not the answer, unfortunately
Are you sure? How many file descriptors are open on the system when
you get that
scrub exception? If you reall
Unfortunately I shut down that node, and anyway lsof wasn't installed.
But $ulimit gives
unlimited
On 17 June 2011 13:00, Sylvain Lebresne wrote:
> On Fri, Jun 17, 2011 at 1:51 PM, Dominic Williams
> wrote:
> > As far as scrub goes, that could be it. I'm already running unlimited file
> > handles
I haven't done it yet, but when I researched how to make
geo-diverse/failover DCs, I figured I'd have to do something like RF=6,
strategy = {DC1=3, DC2=3}, and LOCAL_QUORUM for reads/writes. This gives
you an "ack" after 2 local nodes do the read/write, but the data eventually
gets distributed to
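For reference, creating such a keyspace over raw Thrift in 0.8 looks roughly like this (hedged sketch: the keyspace and class names are illustrative; in 0.8 the per-DC replica counts go in strategy_options, and the DC names must match what your snitch reports):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.CfDef;
import org.apache.cassandra.thrift.KsDef;

public class GeoKeyspace {
    // Define an NTS keyspace with three replicas in each of two data centers
    public static void create(Cassandra.Client client) throws Exception {
        KsDef ksDef = new KsDef("geo_ks",
                "org.apache.cassandra.locator.NetworkTopologyStrategy", new ArrayList<CfDef>());
        Map<String, String> opts = new HashMap<String, String>();
        opts.put("DC1", "3");
        opts.put("DC2", "3");
        ksDef.setStrategy_options(opts);
        client.system_add_keyspace(ksDef);
    }
}

Reads and writes would then be issued at ConsistencyLevel.LOCAL_QUORUM so the ack comes from two nodes in the local DC.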
1. the right way to write that is to just say struct.name, struct.value, etc
2. why are you writing raw thrift instead of using Hector?
On Fri, Jun 17, 2011 at 5:03 AM, Vivek Mishra
wrote:
>
>
>
>
> From: Vivek Mishra
> Sent: Friday, June 17, 2011 3:25 PM
> To: user@cassandra.apache.org
> Subject
One question regarding point 2: why should we always use Hector? Thrift is not
that bad, is it?
Sent from my iPhone
Am 17.06.2011 um 17:12 schrieb Jonathan Ellis :
> 1. the right way to write that is to just say struct.name, struct.value, etc
> 2. why are you writing raw thrift instead of using
A good example, as I understand it, of why you'd use Hector / pycassa / etc.:
if you wanted to implement connection pooling, you would have to
craft your own solution, versus using the solution that is
tested and ready to go, provided by Hector. Thrift doesn't provide
native connection pooling
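As an illustration of the pooling Hector provides out of the box (hedged: the host list, cluster name, and pool size below are illustrative):

import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.factory.HFactory;

public class PooledClient {
    public static void main(String[] args) {
        // One configurator covers several hosts; Hector pools connections to each
        CassandraHostConfigurator config =
                new CassandraHostConfigurator("192.168.1.10:9160,192.168.1.11:9160");
        config.setMaxActive(20); // cap on pooled connections per host
        Cluster cluster = HFactory.getOrCreateCluster("MyCluster", config);
    }
}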
If you don't get frustrated writing Thrift by hand you are a far, far
more patient man than I am.
It's tedious and error-prone to boot.
On Fri, Jun 17, 2011 at 10:30 AM, Markus Wiesenbacher | Codefreun.de
wrote:
> One question regarding point 2: Why should we always use Hector, Thrift is
> not
Thanks Jonathan. I assumed that since each data center owned the full key
space, the first replica would be stored in the dc of the
coordinating node, the 2nd in another dc, and the 3rd+ back in the 1st
dc. But, are you saying that the first endpoint is selected regardless
of the location of t
I see ;)
Sent from my iPhone
Am 17.06.2011 um 17:55 schrieb Jonathan Ellis :
> If you don't get frustrated writing Thrift by hand you are a far, far
> more patient man than I am.
>
> It's tedious and error-prone to boot.
>
> On Fri, Jun 17, 2011 at 10:30 AM, Markus Wiesenbacher | Codefr
On 6/17/2011 7:26 AM, William Oberman wrote:
I haven't done it yet, but when I researched how to make
geo-diverse/failover DCs, I figured I'd have to do something like
RF=6, strategy = {DC1=3, DC2=3}, and LOCAL_QUORUM for reads/writes.
This gives you an "ack" after 2 local nodes do the read/wr
On Fri, Jun 17, 2011 at 12:07 PM, AJ wrote:
> Thanks Jonathan. I assumed since each data center owned the full key space
> that the first replica would be stored in the dc of the coordinating node,
> the 2nd in another dc, and the 3rd+ back in the 1st dc. But, are you saying
> that the first end
> What I don't like about NTS is I would have to have more replicas than I
> need. {DC1=2, DC2=2}, RF=4 would be the minimum. If I felt that 2 local
> replicas was insufficient, I'd have to move up to RF=6 which seems like a
> waste... I'm predicting data in the TB range so I'm trying to keep rep
+1 for this if it is possible...
On Fri, Jun 17, 2011 at 6:31 PM, Eric tamme wrote:
>> What I don't like about NTS is I would have to have more replicas than I
>> need. {DC1=2, DC2=2}, RF=4 would be the minimum. If I felt that 2 local
>> replicas was insufficient, I'd have to move up to RF=6 wh
Run two Cassandra clusters...
-----Original Message-----
From: Eric tamme [mailto:eta...@gmail.com]
Sent: Friday, June 17, 2011 11:31 AM
To: user@cassandra.apache.org
Subject: Re: Docs: Token Selection
> What I don't like about NTS is I would have to have more replicas than
> I need. {DC1=2,
+1 Yes, that is what I'm talking about Eric. Maybe I could write my
own strategy, I dunno. I'll have to understand more first.
On 6/17/2011 10:37 AM, Sasha Dolgy wrote:
+1 for this if it is possible...
On Fri, Jun 17, 2011 at 6:31 PM, Eric tamme wrote:
What I don't like about NTS is I wou
Is there any way to remember the keys (rowIds) inserted into the Cassandra database?
B.R
From: Jonathan Ellis
To: user@cassandra.apache.org
Cc: karim abbouh
Sent: Wednesday, June 15, 2011, 6:05 PM
Subject: Re: last record rowId
You're better served using UUIDs than nu
Hi All
I specified multiple hosts in the seeds field when using cassandra-0.8,
like this
seeds: "192.168.1.115","192.168.1.110","192.168.1.113"
But I am getting this error:
while parsing a block mapping
in "", line 106, column 13:
- seeds: "192.168.1.115","192.168. ...
Have them all within a single " " and not multiple " ", " ".
For example:
seeds: "192.168.1.115, 192.168.1.110"
versus what you have...
On Fri, Jun 17, 2011 at 7:00 PM, Anurag Gujral wrote:
> Hi All
> I specified multiple hosts in seeds field when using cassandra-0.8
> like this
> seeds: "1
Even without lsof, you should be able to get the data from /proc/$pid
-ryan
On Fri, Jun 17, 2011 at 5:08 AM, Dominic Williams
wrote:
> Unfortunately I shutdown that node and anyway lsof wasn't installed.
> But $ulimit gives
> unlimited
>
> On 17 June 2011 13:00, Sylvain Lebresne wrote:
>>
>> On
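Concretely, a standard way to count the open descriptors without lsof (where <pid> is the Cassandra process id):

ls /proc/<pid>/fd | wc -l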
On 6/17/2011 10:31 AM, Eric tamme wrote:
What I don't like about NTS is I would have to have more replicas than I
need. {DC1=2, DC2=2}, RF=4 would be the minimum. If I felt that 2 local
replicas was insufficient, I'd have to move up to RF=6 which seems like a
waste... I'm predicting data in the
Hi,
I'd like to learn how to set up a Brisk cluster with HA/DR in Amazon. Last
time I tried this a few months ago, it was tricky because we had to either
set up a VPN or hack the Cassandra source to get internode communications to
work across regions. But with v 0.8's new BriskSnitch or EC2Snitch,
Hi Jeremiah, can you give more details?
Thanks
On 6/17/2011 10:49 AM, Jeremiah Jordan wrote:
Run two Cassandra clusters...
-----Original Message-----
From: Eric tamme [mailto:eta...@gmail.com]
Sent: Friday, June 17, 2011 11:31 AM
To: user@cassandra.apache.org
Subject: Re: Docs: Token Selection
Yeah that would get the count (although I don't think you can see filenames
- or maybe I just don't know how). Unfortunately that node was shut down. I
then tried restarting with storage port 7001 to isolate the node, as it was
quite toxic for the performance of the cluster, but it now gets an OOM on restart.
If it's rea
Run two clusters, one which has {DC1:2, DC2:1} and one which is
{DC1:1,DC2:2}. You can't have both in the same cluster, otherwise it
isn't possible to tell where the data got written when you want to read
it. For a given key "XYZ" you must be able to compute which nodes it is
stored on ju
> Yes. But, the more I think about it, the more I see issues. Here is what I
> envision (Issues marked with *):
>
> Three or more dc's, each serving as fail-overs for the others with 1 maximum
> unavailable dc supported at a time.
> Each dc is a production dc serving users that I choose.
> Each d
On 6/17/2011 12:33 PM, Eric tamme wrote:
As I said previously, trying to make Cassandra treat things
differently based on some kind of persistent locality set it maintains
in memory .. or whatever .. sounds like you would be absolutely
undermining the core principles of how Cassandra works.
On 6/17/2011 12:32 PM, Jeremiah Jordan wrote:
Run two clusters, one which has {DC1:2, DC2:1} and one which is
{DC1:1,DC2:2}. You can't have both in the same cluster, otherwise it
isn't possible to tell where the data got written when you want to read
it. For a given key "XYZ" you must b
Replication factor is defined per keyspace, if I'm not mistaken. Can't
remember if NTS is per keyspace or per cluster ... if it's per
keyspace, that would be a way around it ... without having to maintain
multiple clusters just have multiple keyspaces ...
On Fri, Jun 17, 2011 at 9:23 PM, AJ
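For what it's worth, the replication strategy and its options are keyspace-level settings, so a hedged sketch of that idea (reusing the Thrift imports from the earlier GeoKeyspace example; the keyspace names are illustrative and "client" is a connected Cassandra.Client):

// Hypothetical: two keyspaces in the same cluster with mirrored per-DC counts
public static void createMirroredKeyspaces(Cassandra.Client client) throws Exception {
    KsDef ksA = new KsDef("ks_dc1_primary",
            "org.apache.cassandra.locator.NetworkTopologyStrategy", new ArrayList<CfDef>());
    Map<String, String> optsA = new HashMap<String, String>();
    optsA.put("DC1", "2");
    optsA.put("DC2", "1");
    ksA.setStrategy_options(optsA);

    KsDef ksB = new KsDef("ks_dc2_primary",
            "org.apache.cassandra.locator.NetworkTopologyStrategy", new ArrayList<CfDef>());
    Map<String, String> optsB = new HashMap<String, String>();
    optsB.put("DC1", "1");
    optsB.put("DC2", "2");
    ksB.setStrategy_options(optsB);

    client.system_add_keyspace(ksA);
    client.system_add_keyspace(ksB);
}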
On 6/17/2011 1:27 PM, Sasha Dolgy wrote:
Replication factor is defined per keyspace if i'm not mistaken. Can't
remember if NTS is per keyspace or per cluster ... if it's per
keyspace, that would be a way around it ... without having to maintain
multiple clusters just have multiple keyspaces
Good day everyone!
I'm getting started with a new project and I'm thinking about using
Cassandra because of its distributed quality and because of its performance.
I'm using Java on the back-end. There are many many things being said about
the Java high level clients for Cassandra on the web. To
I'm using Hector. AFAIK it's the only one that supports failover today.
On Fri, Jun 17, 2011 at 6:02 PM, Daniel Colchete wrote:
> Good day everyone!
> I'm getting started with a new project and I'm thinking about using
> Cassandra because of its distributed quality and because of its performance.
My team prefers Pelops. https://github.com/s7/scale7-pelops
It's had failover since 0.7.
http://groups.google.com/group/scale7/browse_thread/thread/19d441b7cd000de0/624257fe4f94a037
With respect to avoiding writing marshaling code yourself, I agree with the
OP that that is rather lacking with the
I've added some comments/questions inline...
Cheers,
--
Dan Washusen
On Saturday, 18 June 2011 at 8:02 AM, Daniel Colchete wrote:
> Good day everyone!
>
> I'm getting started with a new project and I'm thinking about using Cassandra
> because of its distributed quality and because of its perf
If by this you are obliquely referring to JDBC, I understand there is
a CQL JDBC driver under development
> new semantics on them that are neither Java's nor Cassandra's, and I
>
Obviously that's going to support CQL, not SQL like existing JDBC drivers.