The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.0.3.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
http://cassand
The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.2.12.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
http://cassan
Hi,
I'm working with Cassandra 1.2.2 and I have a question about nodetool
cleanup.
In the documentation, it's written "Wait for cleanup to complete on one
node before doing the next".
I would like to know why we can't run cleanup on several nodes at the
same time.
Thanks
Hello,
I am a newbie to the Cassandra world. I would like to know if it's
possible for two different nodes to write to a single Cassandra node. I
have packet collector software which runs on two different systems. I
would like both of them to write the packets to a single node (same keyspace
and
Hi Julien,
I hope I get this right :)
a repair will trigger a major compaction on your node, which will take up
a lot of CPU and I/O. It needs to do this to build up the
data structure that is used for the repair. After the compaction this is
streamed to the different nodes in order
Hello,
We recently experienced (pretty severe) data loss after moving our 4 node
Cassandra cluster from one EC2 availability zone to another. Our strategy
for doing so was as follows:
- One at a time, bring up new nodes in the new availability zone and
have them join the cluster.
- One
Yes, we saw this same behavior.
A couple of months ago, we moved a large portion of our data out of
Postgres and into Cassandra. The initial migration was done in a
"distributed" manner: we had 600 (or 800, can't remember) processes
reading from Postgres and writing to Cassandra in tight loops.
We have the same setup: one keyspace per client, and currently about 300
keyspaces. nodetool repair takes a long time, 4 hours with -pr on a single
node. We have a 4 node cluster with about 10 GB per node. Unfortunately,
we haven't been keeping track of the running time as keyspaces, or load,
i
Hi
I have migrated my DEV environment from 1.2.8 to 1.2.11 to finally move to
2.0.2, and prepare is 100 to 200 times slower: something that was sub-
millisecond now takes 150 ms. Other CQL operations are normal.
I am not planning to move to 2.0.2 until I fix this. I do not see any warning
or error in
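For reference, the usual pattern with the DataStax Java driver is to pay the prepare cost once per process and only bind/execute afterwards, so even a slow prepare stays off the hot path. A minimal sketch (driver 2.0-style API; the contact point, keyspace and table names are made up for illustration):

    import com.datastax.driver.core.BoundStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class PrepareOnce {
        public static void main(String[] args) {
            // Hypothetical contact point and schema, for illustration only.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace");

            // Prepare once, outside the hot path ...
            PreparedStatement ps = session.prepare(
                    "SELECT value FROM my_table WHERE id = ?");

            // ... then only bind and execute inside the loop, so the cost of
            // prepare() is paid a single time per process.
            for (int id = 0; id < 1000; id++) {
                BoundStatement bound = ps.bind(id);
                session.execute(bound);
            }

            cluster.close();
        }
    }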
> I noticed when I gave the path directly to cassandra.yaml, it works fine.
> Can't I give the directory path here, as mentioned in the doc?
The documentation is wrong; the -Dcassandra.config param is used for the path of
the yaml file, not the config directory.
I’ve emailed d...@datastax.com to let
If it’s just a test system, nuke it and try again :)
Was there more than one node at any time? Does nodetool status show only one
node ?
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
> However, for both writes and reads there was virtually no difference in the
> latencies.
What sort of latency were you getting?
> I’m still not very sure where the current *write* bottleneck is though.
What numbers are you getting?
Could the bottleneck be the client? Can it send writes f
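One quick way to rule the client out is to push writes asynchronously rather than in a single blocking loop and see what throughput the client alone can sustain. A rough sketch with the DataStax Java driver (the contact point, table and row count are placeholders; a real test would also cap the number of in-flight requests):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.List;

    public class AsyncWriteCheck {
        public static void main(String[] args) {
            // Hypothetical contact point and schema, for illustration only.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace");
            PreparedStatement ps = session.prepare(
                    "INSERT INTO my_table (id, value) VALUES (?, ?)");

            // Fire writes asynchronously so a single blocking loop on the
            // client is not the limiting factor, then wait for all of them.
            List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
            long start = System.nanoTime();
            for (int i = 0; i < 100000; i++) {
                futures.add(session.executeAsync(ps.bind(i, "payload-" + i)));
            }
            for (ResultSetFuture f : futures) {
                f.getUninterruptibly();
            }
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("%d writes in %.1f s (%.0f writes/s)%n",
                    futures.size(), secs, futures.size() / secs);

            cluster.close();
        }
    }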
> Thanks, But I suppose it’s just for Debian? Am I right?
There are Debian and RPM packages, and people deploy them or the binary
packages with Chef and similar tools.
It may be easier to answer your question if you describe the specific platform
/ needs.
cheers
-
Aaron
> Mr Coli, what's the difference between deploying binaries and the binary package?
> I downloaded the binary package from the Apache Cassandra homepage, am I wrong?
Yes, you can use the instructions here for the binary package:
http://wiki.apache.org/cassandra/DebianPackaging
When you use the binary packa
Mr. Bottaro,
About how many column families are in your keyspaces? We have 28 per
keyspace.
Are you using vnodes? We are, and they are set to 256.
What version of Cassandra are you running? We are running 1.2.9.
On Mon, Nov 25, 2013 at 11:36 AM, Christopher J. Bottaro <
cjbott...@academicworks.co
I’m trying to decide which NoSQL database to use, and I’ve certainly decided
against MongoDB due to its use of mmap. I’m wondering if Cassandra would also
suffer from a similar inefficiency with small documents. In MongoDB, if you
have a large set of small documents (each much less than the 4KB p
I did some tests and apparently the prepared statement is not cached at all:
in a loop (native protocol, DataStax Java driver, both 1.3 and 4.0) I
prepared the same statement 20 times and the elapsed times were almost
identical. I think it has something to do with CASSANDRA-6107 that was
implement
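For what it's worth, a test along those lines can be sketched like this with the Java driver (the contact point, keyspace and query are placeholders): prepare the identical CQL string repeatedly and time each call; if an earlier preparation were being reused, everything after the first call should be much cheaper.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class PrepareTimingTest {
        public static void main(String[] args) {
            // Hypothetical contact point and schema, for illustration only.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_keyspace");

            // Prepare the exact same statement 20 times and time each call.
            for (int i = 0; i < 20; i++) {
                long t0 = System.nanoTime();
                session.prepare("SELECT value FROM my_table WHERE id = ?");
                System.out.printf("prepare #%d: %.2f ms%n",
                        i, (System.nanoTime() - t0) / 1e6);
            }

            cluster.close();
        }
    }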
Blowing away the database does indeed seem to fix the problem, but it
doesn't exactly make me feel warm and cozy. I have no idea how the database
got screwed up, so I don't know what to avoid doing so that I don't have
this happen again on a production server. I never had any other nodes, so it
has
Here are some calculated 'latency' results reported by cassandra-stress when
asked to write 10M rows, i.e.
cassandra-stress -d , -n 1000
(we actually had cassandra-stress running in daemon mode for the below tests)
avg_latency percentiles: 90, 99, 99.9, 99.99
Write: 8 cores, 32 GB, 3-di
On Mon, Nov 25, 2013 at 3:35 PM, Robert Wille wrote:
> Blowing away the database does indeed seem to fix the problem, but it
> doesn't exactly make me feel warm and cozy. I have no idea how the database
> got screwed up, so I don't know what to avoid doing so that I don't have
> this happen again
On Mon, Nov 25, 2013 at 12:28 PM, John Pyeatt wrote:
> Are you using vnodes? We are, and they are set to 256.
> What version of Cassandra are you running? We are running 1.2.9.
>
Vnode performance vis-à-vis repair is tracked in this JIRA issue:
https://issues.apache.org/jira/browse/CASSANDRA-5220
Unfortunat
Recently we had a strange thing happen. Altering schema (gc_grace_seconds) for
a column family resulted in a schema disagreement. 3/4 of nodes got it, 1/4
didn't. There was no partition at the time, nor were there multiple schema
updates issued. Going to the nodes with stale schema and trying to
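A quick way to see which nodes disagree is to check the schema version each node reports (nodetool describecluster shows the same information). A small sketch with the DataStax Java driver reading the system tables (the contact point is a placeholder):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class SchemaVersionCheck {
        public static void main(String[] args) {
            // Placeholder contact point; point this at any node in the cluster.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();

            // The coordinator's own schema version ...
            Row local = session.execute(
                    "SELECT schema_version FROM system.local").one();
            System.out.println("local: " + local.getUUID("schema_version"));

            // ... and the versions it has gossiped for each peer. More than
            // one distinct UUID across the cluster means schema disagreement.
            for (Row peer : session.execute(
                    "SELECT peer, schema_version FROM system.peers")) {
                System.out.println(peer.getInet("peer") + ": "
                        + peer.getUUID("schema_version"));
            }

            cluster.close();
        }
    }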
It could be https://issues.apache.org/jira/browse/CASSANDRA-6369, fixed in
1.2.12/2.0.3.
-M
"Shahryar Sedghi" wrote in message
news:cajuqix7_jvwbj7sx5p8hvmwy5od5ze7pbtv1y5ttga2aws6...@mail.gmail.com...
I did some tests and apparently the prepared statement is not cached at all, in
a loop (native