So I got past the leaving problem once I found the removetoken force command.
Now I'm trying to move tokens and that will never complete either, but as I was
watching netstats for streaming to the moving node I noticed it seemed to stop
all of a sudden and list no more pending streams. At the
Thanks. It could be hidden from a human operator, I suppose :)
On 12/13/2011 7:12 PM, Harold Nguyen wrote:
Hi Maxim,
The reason is that if node 1 goes down while you delete
information on node 2, node 1 will know not to repair the data when it comes
back again: it will know that an operation has been performed to delete the
data.
Harold
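Harold's point is the essence of tombstones: a delete is itself a write (a
marker with its own timestamp), and when replicas reconcile, the newest
timestamp wins, so a recovered node's stale value loses to the delete marker.
A minimal sketch of that reconciliation rule in Python (hypothetical names,
not Cassandra's actual code):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    value: Optional[bytes]   # None marks a tombstone (delete marker)
    timestamp: int           # client-supplied, e.g. microseconds since epoch

def reconcile(a: Cell, b: Cell) -> Cell:
    """Return the winning cell: the one with the higher timestamp.
    A tombstone with a newer timestamp beats an older live value, which is
    why a node that missed the delete does not 'repair' the data back."""
    return a if a.timestamp >= b.timestamp else b

# Node 1 was down and still holds the old value; node 2 holds the tombstone.
old = Cell(value=b"hello", timestamp=100)
tomb = Cell(value=None, timestamp=200)
assert reconcile(old, tomb).value is None  # the delete wins
```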
-----Original Message-----
From: Maxim Potekhin
The cli's 'list' command is the same as get_range_slices(), which is the
one type of query where you can get back range ghosts (deleted keys).
On Tue, Dec 13, 2011 at 6:02 PM, Maxim Potekhin wrote:
Hello,
I searched the archives and it appears that this question was once asked but
was not answered. I just deleted a lot of rows, yet when I "list" in
the cli I still see the keys. This is not the same as getting slices, is it?
Anyhow, what's the reason and rationale? I run 0.8.8.
Thanks
Max
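The "range ghost" behaviour Jonathan describes can be pictured with a toy
model: a range scan walks row keys, and a row whose columns are all
tombstoned still has a key entry until the tombstones are compacted away, so
it comes back with zero live columns. A sketch under those assumptions (not
Cassandra's actual code):

```python
def get_range_slices(rows, start, end):
    """Toy range scan: yields (key, live_columns) for keys in [start, end].
    rows maps key -> {column: (value, is_tombstoned)}. Keys whose columns
    are all tombstoned still appear, with an empty column dict -- these
    are the 'range ghosts' the cli 'list' command shows."""
    out = []
    for key in sorted(rows):
        if start <= key <= end:
            live = {c: v for c, (v, dead) in rows[key].items() if not dead}
            out.append((key, live))
    return out

rows = {
    "row1": {"name": ("Max", False)},
    "row2": {"name": ("gone", True)},   # every column tombstoned
}
result = get_range_slices(rows, "row1", "row2")
assert ("row2", {}) in result  # deleted row still listed, with no columns
```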
In general, writes are faster than reads in Cassandra.
maki
On 2011/12/13, at 13:54, Waqar Azeem wrote:
> Hi,
>
> 'threads' are nested in a 'forum', therefore, I decided to create a
> column-family 'thread' with a column named 'parent'.
>
>
> Does this idea match the Cassandra philosophy? Be
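For the forum/thread question above, the usual Cassandra-style answer is to
denormalize for the read path rather than model a relational 'parent'
foreign key: key a column family by forum id and store each thread as a
column under it, so "threads in a forum" is a single-row read. A toy
illustration of that layout (hypothetical names, plain Python dicts standing
in for a column family):

```python
# Column family "forum_threads": row key = forum id,
# column name = thread id, column value = thread title.
forum_threads = {
    "forum:42": {
        "thread:1": "Welcome",
        "thread:2": "Rules",
    }
}

def threads_in_forum(cf, forum_id):
    """One-row read: all threads of a forum, no join needed."""
    return sorted(cf.get(forum_id, {}).items())

assert threads_in_forum(forum_threads, "forum:42")[0] == ("thread:1", "Welcome")
```

A 'parent' column on each thread can still exist for the reverse lookup; the
point is that reads are served by a table shaped like the query.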
On Tue, Dec 13, 2011 at 1:46 PM, Matthew Stump wrote:
> If I create a column family with a comparator of:
>
> DynamicCompositeType(a=>UTF8Type,b=>UTF8Type,c=>UTF8Type,t=>TimeUUIDType)
>
> Can I do an insert that has a column where a,b,c,t are set and an insert with
> a,b,t are set, or must I alw
We got the server to start up by applying the patch from the mentioned JIRA,
and modified the configuration parameter reader code to set
chunk_length_kb to null on start so that Cassandra will use the default
value of 64k. We then deployed the code to the rest of the servers in the
cluster, a
If I create a column family with a comparator of:
DynamicCompositeType(a=>UTF8Type,b=>UTF8Type,c=>UTF8Type,t=>TimeUUIDType)
Can I do an insert that has a column where a,b,c,t are set and an insert with
a,b,t are set, or must I always set values for columns a,b,c,t?
If I must always set values
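With a dynamic composite comparator, each column name carries its own
sequence of (alias, value) components, so two names in the same row can have
different component counts; comparison proceeds component by component. A
simplified sketch of that ordering in Python (it ignores the per-alias type
handling of the real DynamicCompositeType):

```python
def compare_composite(a, b):
    """Compare two composite column names component by component.
    Each name is a list of (alias, value) pairs. Components are compared
    by alias first, then value; if one name is a prefix of the other,
    the shorter name sorts first."""
    for (aa, av), (ba, bv) in zip(a, b):
        if aa != ba:
            return -1 if aa < ba else 1
        if av != bv:
            return -1 if av < bv else 1
    return len(a) - len(b)

# Two column names with different component counts can coexist in one row;
# they simply occupy different positions in the sort order.
abt  = [("a", "x"), ("b", "y"), ("t", "uuid1")]
abct = [("a", "x"), ("b", "y"), ("c", "z"), ("t", "uuid1")]
assert compare_composite(abt, abt) == 0
assert compare_composite(abct, abt) < 0  # alias "c" sorts before "t"
```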
Hi, this reminds me of a problem I had with a truncate.
I think that to show the schema, your cluster must be stable and your schema
agreed upon across the cluster.
See whether your cluster is really the one described by your nodetool ring and
whether the same schema is present on each node.
You should visit this
Running cli --debug:
[default@PANDA] show schema;
null
java.lang.RuntimeException
at
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:310)
at
org.apache.cassandra.cli.CliMain.processStatement(CliMain.java:217)
at org.apache.cassandra.cli.CliMain.mai
Looks like this is related to the bug in
https://issues.apache.org/jira/browse/CASSANDRA-3558.
'show schema' shows that the sstable compression chunk_length_kb on the node
where the schema was applied is 65536, though the schema update statement
specified chunk_length_kb=64, and on other nodes on the cluster
I don't think so. We are using size tiered compaction, not levelled
compaction, and there are only 76 SSTables for the entire keyspace (375
files currently in the directory).
Dan
From: Jeremiah Jordan [mailto:jeremiah.jor...@morningstar.com]
Sent: December-13-11 10:09
To: user@cassandra.ap
Does your issue look similar this one?
https://issues.apache.org/jira/browse/CASSANDRA-3532
It is also dealing with compaction taking 10X longer in 1.0.X.
On 12/13/2011 09:00 AM, Dan Hendry wrote:
I have been observing that major compaction can be incredibly slow in
Cassandra 1.0 and was curious about the extent to which anybody else has
noticed similar behaviour. Essentially I believe the problem involves the
combination of wide rows and expiring columns.
Relevant details included in:
https: