Cassandra "graveyard" sounds like a lot of thombstones that will be
compacted during normal compact.
You can trigger that manually using the nodetool.
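For example (a sketch; keyspace and column family names are placeholders, and
note that tombstones are only purged once gc_grace_seconds has passed):

# flush memtables, then force a major compaction on one column family
nodetool -h localhost flush MyKeyspace MyColumnFamily
nodetool -h localhost compact MyKeyspace MyColumnFamily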
2012/3/28 Erik Forsberg
> Hi!
>
> I was trying out the "truncate" command in cassandra-cli.
>
> http://wiki.apache.org/cassandra/CassandraCli0
Yes, that is one of the possible solutions to your problem.
When you want to retrieve only the skills of a particular row, just get the
columns using "skill:" as the start value.
A suggestion for your example: use a ~ instead of : as the separator. A tilde
is used less often in standard sentences.
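A rough pycassa sketch of that slice (keyspace, column family, and row key are
made up):

import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
users = pycassa.ColumnFamily(pool, 'users')

# Fetch only the columns whose names start with "skill:"; the trailing '~'
# sorts after every printable skill name, closing the slice range.
skills = users.get('user42', column_start='skill:', column_finish='skill:~')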
Hmm. I thought that Cassandra would encode the composite column without the
colon and that it was only there for illustration purposes, so the
suggestion to use ~ is confusing. Are there some docs you can point me to?
Also, after some reading, it seems to me that it is not even possible to
have a
If you use a CompositeColumn it does, but it looked to me like your example
just used the simple UTF8-based approach. My apologies for the confusion.
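For completeness, a minimal pycassa sketch of a true CompositeType comparator
(names are hypothetical). The components are length-prefix encoded, so no ':'
separator is ever stored:

from pycassa.system_manager import SystemManager
from pycassa.types import CompositeType, UTF8Type
import pycassa

sys_mgr = SystemManager('localhost:9160')
sys_mgr.create_column_family('MyKeyspace', 'users',
                             comparator_type=CompositeType(UTF8Type(), UTF8Type()))

# Column names are now (category, detail) tuples, not delimited strings.
pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
users = pycassa.ColumnFamily(pool, 'users')
users.insert('user42', {('skill', 'java'): '5'})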
2012/3/28 Ben McCann
> Hmm. I thought that Cassandra would encode the composite column without
> the colon and that it was only there for illu
Hi all,
I've noticed a change in behavior between 0.8.10 and 1.0.8 when it comes
to sstable2json output and major compactions. Is this a bug or intended
behavior?
With 1.0.8:
create keyspace ks;
use ks;
create column family foo;
set foo[1][1] = 1;
nodetool -h localhost flush
sstable2json foo
I'm leaning towards storing serialized JSON at the moment. It's too bad
Cassandra doesn't have a better way of storing collections or
document-oriented data (e.g. a JsonType queryable with CQL).
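A minimal sketch of the serialized-JSON approach with pycassa (the column
family name is hypothetical):

import json
import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
profiles = pycassa.ColumnFamily(pool, 'profiles')

# Store the whole document as one opaque JSON blob...
profiles.insert('user42', {'doc': json.dumps({'skills': ['java', 'python']})})

# ...and deserialize on read. The trade-off: Cassandra cannot index or
# query inside the blob.
doc = json.loads(profiles.get('user42')['doc'])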
On Wed, Mar 28, 2012 at 1:19 AM, R. Verlangen wrote:
> If you use the CompositeColumn it does, b
Yes - but anyway, in your example you need a "key range query", and that
requires OPP, right?
On Tue, Mar 27, 2012 at 5:13 PM, Guy Incognito wrote:
> multiget does not require OPP.
>
> On 27/03/2012 09:51, Maciej Miklas wrote:
>
> multiget would require Order Preserving Partitioner, and this can lea
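To make the multiget point concrete, a pycassa sketch (keyspace, column family,
and the five itemType keys are assumptions): multiget fetches an explicit list
of row keys, so it works with RandomPartitioner; only range scans over unknown
keys need OPP.

import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
activity = pycassa.ColumnFamily(pool, 'activity')

# The five itemType row keys are known up front, so no key-range scan
# (and hence no Order Preserving Partitioner) is needed.
item_types = ['photo', 'comment', 'like', 'share', 'post']
rows = activity.multiget(item_types)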
RAID0 would help me use the total disk space available at each node more
efficiently, but tests have shown that under write load it behaves much worse
than using separate data dirs, one per disk.
There are different strategies for how RAID0 splits reads; changing the I/O
scheduler and filesystem can also help.
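For reference, the one-directory-per-disk layout is configured in
cassandra.yaml roughly like this (paths are examples):

data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data
    - /mnt/disk3/cassandra/data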
Radim,
We are only deleting columns. Rows are never deleted.
We are continually adding new columns that are then deleted. Existing
columns (deleted or otherwise) are never updated.
Ross
On 28 March 2012 13:51, John Laban wrote:
> (Radim: I'm assuming you mean "do not delete already d
On 28.3.2012 13:14, Ross Black wrote:
> Radim,
> We are only deleting columns. Rows are never deleted.
I suggest changing the app to delete rows instead. Try composite keys.
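A pycassa sketch of the idea (the bucketed row-key scheme is an assumption
about the data model): bucket the short-lived columns into rows keyed by base
key plus time bucket, so that expiring a bucket is a single row tombstone
instead of thousands of column tombstones.

import pycassa

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
events = pycassa.ColumnFamily(pool, 'events')

# Write into a composite-style row key: <base key>:<time bucket>
events.insert('queue1:2012-03-28', {'event-0001': 'payload'})

# Later, drop the whole bucket with one row-level delete.
events.remove('queue1:2012-03-28')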
Hi Radim,
I am hunting for what I believe is a bug in Cassandra and tombstone
handling that may be triggered by our particular application usage.
I appreciate your attempt to help, but without you actually knowing what
our application is doing and why, your advice to change our application is
poin
On 03/28/2012 02:04 PM, Radim Kolar wrote:
RAID0 would help me use the total disk space available at each node
more efficiently, but tests have shown that under write load it
behaves much worse than using separate data dirs, one per disk.
There are different strategies for how RAID0 splits reads,
On Wednesday 28 of March 2012, Igor wrote:
> I'm also trying to evaluate different strategies for RAID0 as drive for
> cassandra data storage. If I need 2T space to keep node tables, which
> drive configuration is better: 1T x 2drives or 500G x 4drives?
Having _similar_ family of HDDs 4x smaller
I'm also trying to evaluate different strategies for RAID0 as drive
for cassandra data storage. If I need 2T space to keep node tables,
which drive configuration is better: 1T x 2drives or 500G x 4drives?
More drives are always better.
Which stripe size is optimal?
Smaller stripe sizes are
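For benchmarking different stripe sizes, an mdadm sketch (device names are
examples; --chunk is in KB):

mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde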
We upgraded to 1.0.8, and it looks like the problem is gone.
Thanks for your help,
Daning
On Sun, Mar 25, 2012 at 9:54 AM, aaron morton wrote:
> Can you go to those nodes and run describe cluster ? Also check the logs
> on the machines that are marked as UNREACHABLE .
>
> A node will be marked as UNREA
Hi,
We are trying to estimate the amount of storage we need for a production
cassandra cluster. While I was doing the calculation, I noticed a very
dramatic difference in terms of storage space used by cassandra data files.
Our previous setup consisted of a single-node cassandra 0.8.x with no
rep
Actually, after I read an article on cassandra 1.0 compression just now (
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-compression), I
am more puzzled. In our schema, we didn't specify any compression options
-- does cassandra 1.0 perform some default compression? or is the data
red
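For reference, compression in 1.0 is opt-in per column family; if it were in
play it would have been enabled explicitly, e.g. from cassandra-cli (the
column family name is a placeholder):

update column family users with
    compression_options = {sstable_compression: SnappyCompressor, chunk_length_kb: 64};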
Hey guys,
We have a fresh 4 node 0.8.10 cluster that we want to pump lots of data into.
The data resides on 5 data machines that are different from Cassandra
nodes. Each of these data nodes has 7 disks where the data resides.
In order to get maximum load performance, we are assigning 7 IPs to
each
Well, no. My assumption is that he knows what the 5 itemTypes (or
appropriate corresponding ids) are, so he can do a known 5-rowkey
lookup. If he does not know, then agreed, my proposal is not a great fit.
You could do (as originally suggested)
userId -> itemType:activityId
if you want to keep
Hi,
Here is the stack trace that we get from sstableloader
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
        at org.apache.ca
Where the F^$% have the packages for 06x gone?
http://www.apache.org/dist/cassandra/debian/dists/06x/main/binary-amd64/
Is empty. What gives?
We are currently using version 1.0.0-2. Do we still need to migrate to the
latest 1.0 release before migrating to 1.1? It looks like the incompatibility
is only between 1.0.3 and 1.0.8.
On Tue, Mar 27, 2012 at 6:42 AM, Benoit Perroud wrote:
> Thanks for the quick feedback.
>
> I will drop the schema t
On 03/28/2012 07:45 PM, Ashley Martens wrote:
> Where the F^$% have the packages for 06x gone?
Easy there, pardner.
> http://www.apache.org/dist/cassandra/debian/dists/06x/main/binary-amd64/
>
> Is empty. What gives?
While the repository Packages list does appear to be empty, the 0.6.13
package
Using this apt source list:
deb http://www.apache.org/dist/cassandra/debian 06x main
deb-src http://www.apache.org/dist/cassandra/debian 06x main
E: Package 'cassandra' has no installation candidate
Has the apt source changed?
On Wed, Mar 28, 2012 at 7:18 PM, Michael Shuler wrote:
> On 03/
Hi,
We are using the Cassandra JDBC driver (found in [1]) to call the Cassandra
server using CQL and JDBC calls. One of the main disadvantages is that this
driver is not available in a Maven repository where people can access it
publicly. Currently we have to check out the source and build it ourselves. Is
there any
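Until the driver is published to a public repository, one workaround is
installing the locally built jar into a local Maven repository (the
coordinates below are hypothetical):

mvn install:install-file -Dfile=cassandra-jdbc-1.0.5.jar \
    -DgroupId=org.apache.cassandra -DartifactId=cassandra-jdbc \
    -Dversion=1.0.5 -Dpackaging=jar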
Correct - I also see no other solution to this problem.
On Thu, Mar 29, 2012 at 1:46 AM, Guy Incognito wrote:
> well, no. my assumption is that he knows what the 5 itemTypes (or
> appropriate corresponding ids) are, so he can do a known 5-rowkey lookup.
> if he does not know, then agreed, my p