Unless it is a 0.8.1 RC or beta
On Fri, Jul 1, 2011 at 12:57 PM, Jonathan Ellis wrote:
> This isn't 2818 -- (a) the 0.8.1 protocol is identical to 0.8.0 and
> (b) the whole cluster is on the same version.
>
> On Thu, Jun 30, 2011 at 9:35 PM, aaron morton
> wrote:
> > This seems to be a known is
This isn't 2818 -- (a) the 0.8.1 protocol is identical to 0.8.0 and
(b) the whole cluster is on the same version.
On Thu, Jun 30, 2011 at 9:35 PM, aaron morton wrote:
> This seems to be a known issue related
> to https://issues.apache.org/jira/browse/CASSANDRA-2818 e.g. https://issues.apache.org/
Hello,
I would like to run Cassandra inside Elastic Beanstalk
(http://aws.amazon.com/elasticbeanstalk/)
along with the distributed client application.
This blog advises against it:
http://www.evidentsoftware.com/embedding-cassandra-within-tomcat-for-testing/
Is it really so?
This seems to be a known issue related to
https://issues.apache.org/jira/browse/CASSANDRA-2818 e.g.
https://issues.apache.org/jira/browse/CASSANDRA-2768
There was some discussion on IRC today; driftx said the simple fix was
a full cluster restart. Or perhaps a rolling restart with the
cassandra.in.sh is old skool, from the 0.6 series; the 0.7 series uses cassandra-env.sh. The
packages put it in /etc/cassandra.
This works for me at the end of cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dpasswd.properties=/etc/cassandra/passwd.properties"
JVM_OPTS="$JVM_OPTS -Daccess.properties=/etc/cassandra/ac
I'm interested. :)
On Thu, Jun 30, 2011 at 11:44 AM, Daniel Doubleday
wrote:
> Hi all - or rather devs
>
> we have been working on an alternative implementation to the existing row
> cache(s)
>
> We have 2 main goals:
>
> - Decrease memory -> get more rows in the cache without suffering a huge
Repair doesn't compact. Those are different processes already.
maki
On 2011/07/01, at 7:21, A J wrote:
> Thanks all !
> In other words, I think it is safe to say that a node as a whole can
> be made consistent only on 'nodetool repair'.
>
> Has there been enough interest in providing anti-ent
We had a visitor from Intel a month ago.
One question from him was "What could you do if we gave you a server 2 years
from now that had 16TB of memory"
I went Eh... using Java?
2 years is maybe unrealistic, but you can already get some quite acceptable
prices even on servers in the 100GB
On Thu, Jun 30, 2011 at 5:27 PM, Jonathan Ellis wrote:
> On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo
> wrote:
> > Read repair does NOT repair tombstones.
>
> It does, but you can't rely on RR to repair _all_ tombstones, because
> RR only happens if the row in question is requested by a clie
It would be helpful if this were automated somehow.
OK, I kind of found the magic bullet, but you can only use it to shoot your
enemy at really close range :)
For the read path, the Thrift API already limits the output to a list of
columns, so it does not make sense to use maps in the internal operations.
Plus the returned CF on the read path is not
Thanks all !
In other words, I think it is safe to say that a node as a whole can
be made consistent only on 'nodetool repair'.
Has there been enough interest in providing anti-entropy without
compaction as a separate operation (nodetool repair does both) ?
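For what it's worth, a minimal sketch of kicking that off on one node (the host is only an example):
nodetool -h 10.0.0.1 repair
Run it against each node in turn, ideally within GCGraceSeconds, so tombstones reach every replica before they are eligible for collection.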
On Thu, Jun 30, 2011 at 5:27 PM, Jonat
On Thu, Jun 30, 2011 at 3:47 PM, Edward Capriolo wrote:
> Read repair does NOT repair tombstones.
It does, but you can't rely on RR to repair _all_ tombstones, because
RR only happens if the row in question is requested by a client.
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder o
Hi all,
I have upgraded my whole cluster to 0.8.1. Today one of the disks in one
of the nodes died. After replacing the disk I tried running repair, but
this message appears:
INFO [manual-repair-bdb4055a-d370-4d2a-a1dd-70a7e4fa60cf] 2011-06-30
20:36:25,085 AntiEntropyService.java (line 179) Exclud
Found the fix myself, and wanted to share the resolution. Documentation
states that the "cassandra.in.sh" file needs to be updated with the
following values, if the properties files exist in the directory I've
stipulated:
JVM_OPTS="$JVM_OPTS -Dpasswd.properties=/etc/cassandra/passwd.properties"
J
As I understand, it has to do with a node being up but missing the delete
message (remember, if you apply the delete at CL.QUORUM, you can have almost
half the replicas miss it and still succeed). Imagine that you have 3 nodes A,
B, and C, each of which has a column 'foo' with a value 'bar'. The
On Thu, Jun 30, 2011 at 4:25 PM, A J wrote:
> I am a little confused about why nodetool repair has to run
> within GCGraceSeconds.
>
> The documentation at:
> http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
> is not very clear to me.
>
> How can a delete be 'unforgo
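A minimal worked sketch of the scenario being described (replica names and settings are only illustrative):
- RF = 3 (replicas A, B, C); a delete of 'foo' at CL.QUORUM is applied by A and B, while C is down and misses it.
- No repair is run, and after GCGraceSeconds A and B compact the tombstone away.
- C comes back still holding 'foo' = 'bar', and the deleted column can be resurrected onto A and B.
Repairing within GCGraceSeconds gets the tombstone onto C before it can be forgotten.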
I am a little confused about why nodetool repair has to run
within GCGraceSeconds.
The documentation at:
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
is not very clear to me.
How can a delete be 'unforgotten' if I don't run nodetool repair? (I
understand that if
For your consistency case, it is actually an ALL read that is needed,
not an ALL write. An ALL read, with whatever consistency level of write
you need (to support machines dying), is the only way to get
consistent results in the face of a failed write at > ONE that
went to one node, b
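A small worked example of that (the numbers are only illustrative):
N = 3 replicas; a write at CL.QUORUM fails but still lands on 1 replica.
A read at CL.QUORUM (2 of 3) can be served by the 2 replicas that never saw the write.
A read at CL.ALL (3 of 3) must include the replica that did, so the partial write is always seen and read-repaired.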
Hi,
I am encountering an error while trying to set up simple authentication in a
test environment.
BACKGROUND
(1) Cassandra Version: ReleaseVersion: 0.7.2-0ubuntu4~lucid1
(2) OS Level: Linux cassandra1 2.6.32-32-server #62-Ubuntu SMP Wed Apr 20
22:07:43 UTC 2011 x86_64 GNU/Linux
2 node cluster
The CQL drivers are all still sitting on top of the execute_cql_query
Thrift API method for now.
On Wed, Jun 29, 2011 at 2:12 PM, wrote:
>
> Someone asked a while ago whether Cassandra was vulnerable to injection
> attacks:
>
> http://stackoverflow.com/questions/5998838/nosql-injection-php-phpc
I have been working on Cassandra for the last 4 weeks and am trying to load a large
amount of data. I am trying to use the bulk loading technique but am not clear on
the process. Could someone explain the process for the bulk load?
Also, is the new bulk loading utility discussed in the previous posts
available?
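In case it helps until someone writes up the full process: my understanding is that the bulk load utility shipped with 0.8.1, sstableloader, is pointed at a directory of sstables whose name matches the target keyspace, roughly (the path is illustrative):
bin/sstableloader /path/to/MyKeyspace
The machine running sstableloader streams the files directly to the replicas, so it does not have to be a member of the cluster.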
On Thu, Jun 30, 2011 at 12:44 PM, Daniel Doubleday wrote:
> Hi all - or rather devs
>
> we have been working on an alternative implementation to the existing row
> cache(s)
>
> We have 2 main goals:
>
> - Decrease memory -> get more rows in the cache without suffering a huge
> performance penalty
Hi all - or rather devs
we have been working on an alternative implementation to the existing row
cache(s)
We have 2 main goals:
- Decrease memory -> get more rows in the cache without suffering a huge
performance penalty
- Reduce gc pressure
This sounds a lot like we should be using the new
Here's my understanding of things ... (this applies only to the regular on-heap
implementation of the row cache)
> Why does Cassandra not cache a row that was requested a few times?
What does the cache capacity read? Is it > 0?
> What the ReadCount attribute in ColumnFamilies indicates and why it rema
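If the capacity is 0, a minimal way to enable the row cache for a column family from cassandra-cli (the CF name is just an example) is:
update column family Users with rows_cached = 10000;
nodetool cfstats should then report the row cache capacity and hit rate for that CF.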
Thanks.
But then the client application has the responsibility to sort the 3
segments (assuming that I need to order the "user browsing history" in the
example); I guess the total time would not be significantly different. Also,
this results in 3 times more seeks while the original way needs only
It should of course be noted that how hard it is to load balance depends a
lot on your dataset.
Some datasets load balance reasonably well even when ordered, and use of the
OPP is not a big problem at all (on the contrary); in quite a few use
cases with current HW, read performance really is
The reason to break it up is that the information will then be on
different servers, so you can have server 1 spending time retrieving row
1, while you have server 2 retrieving row 2, and server 3 retrieving row
3... So instead of getting 3000 things from one server, you get 1000
from 3 servers in
Hi,
I am encountering an error while trying to set up simple authentication in a
test environment.
*BACKGROUND*
*Cassandra Version: ReleaseVersion: 0.7.2-0ubuntu4~lucid1*
*OS Level: Linux cassandra1 2.6.32-32-server #62-Ubuntu SMP Wed Apr 20
22:07:43 UTC 2011 x86_64 GNU/Linux*
*2 node cluster*
P
fixed in 0.8.1. https://issues.apache.org/jira/browse/CASSANDRA-2727
On Thu, Jun 30, 2011 at 3:09 AM, Markus Mock wrote:
> Hello,
> I am running into the following problem: I am running a single node
> cassandra setup (out of the box so to speak) and was trying out the code
> in apache-cassandr
I think I'll do the former, thanks!
On Wed, Jun 29, 2011 at 11:16 PM, aaron morton wrote:
> How about get_slice() with reversed == true and count = 1 to get the
> highest time UUID ?
>
> Or you can also store a column with a magic name that has the value of the
> timeuuid that is the current me
Hi,
I am running Cassandra 0.7.4 and I monitor the nodes using JConsole.
I am trying to figure out where Cassandra reads the returned rows from,
and there are a few strange things...
1. I am reading a few rows (using Hector) and the
org.apache.cassandra.db.ColumnFamilies...ReadCount
remains 0
What I want is to get the records in the same order in which they were inserted.
How can I get this using any type of comparator?
If there is Java code for this, it would be useful.
From: aaron morton
To: user@cassandra.apache.org
Sent: Tuesday, 28 June 201
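For what it's worth, one common approach to the question above is to use TimeUUID column names so columns sort by the time they were written; a minimal cassandra-cli sketch (the CF name is just an example):
create column family Events with comparator = TimeUUIDType;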
Hello,
I am running into the following problem: I am running a single node
cassandra setup (out of the box so to speak) and was trying out the code
in apache-cassandra-0.8.0-src/examples/hadoop_word_count.
The bin/word_count_setup seems to work fine as cassandra-cli reports that
there are 1000 r
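For reference, the example is normally driven by the two scripts in that directory, assuming the node is already running; a rough sketch:
cd examples/hadoop_word_count
bin/word_count_setup    # loads the sample rows into Cassandra
bin/word_count          # runs the MapReduce job over them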
How far behind the Cassandra release cycle is Brisk? If 0.8.1 of
Cassandra was released yesterday, when (if it isn't already) will
the Brisk distribution pick up 0.8.1?
-sd
--
Sasha Dolgy
sasha.do...@gmail.com