I agree.
The problem could be while redistributing the tokens. In that case the
hashes have to be recalculated on each of the candidate nodes.
-Thanks,
Prasenjit
On Thu, Jul 19, 2012 at 12:19 PM, Patrik Modesto wrote:
> Hi,
>
> I know that RandomPartitioner does MD5 of a key and the MD5 is then
>
Hi Prasenjit,
I don't see the need to recalculate anything. A key has one MD5
hash; it doesn't change. Just use the hash to select a node, then use
the plain key. Can you elaborate on the redistribution, please?
Regards,
P.
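For illustration, a minimal Python sketch (my own, and only an approximation of Cassandra's actual RandomPartitioner) of the point above: a key's MD5 token never changes, and node selection is just a walk around the ring. The node tokens below are hypothetical.

import hashlib

RING_SIZE = 2 ** 127  # RandomPartitioner tokens fall in 0 .. 2**127 - 1

def token_for(key):
    # MD5 of the key folded into the token space (approximates
    # RandomPartitioner, which treats md5(key) as a big integer).
    return int.from_bytes(hashlib.md5(key).digest(), "big") % RING_SIZE

def node_for(key, ring):
    # First node token >= the key's token, wrapping past the highest one.
    t = token_for(key)
    owner = min((nt for nt in ring if nt >= t), default=min(ring))
    return ring[owner]

# Hypothetical three-node ring with evenly spaced tokens.
ring = {0: "node-a", RING_SIZE // 3: "node-b", 2 * RING_SIZE // 3: "node-c"}
print(node_for(b"user:42", ring))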
On Thu, Jul 19, 2012 at 9:09 AM, prasenjit mukherjee wrote:
> The p
Hi,
I tried to add a node a few days ago and it failed. I finally made it
work with another node, but now when I run describe cluster in the CLI
I get this:
Cluster Information:
Snitch: org.apache.cassandra.locator.Ec2Snitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema version
I got that a couple of times (due to DNS issues in our infra).
What you could try:
- check the cassandra log. It is possible you will see a log line telling
you that 10.56.62.211 and 10.59.21.241 or 10.58.83.109 share the same token
- if 10.56.62.211 is up, try decommission (via nodetool)
- if not, move 10.59.2
Hi, I wasn't able to see the token currently used by 10.56.62.211
(the ghost node).
I already removed the token 6 days ago:
-> "Removing token 170141183460469231731687303715884105727 for /10.56.62.211"
"- check in cassandra log. It is possible you see a log line telling
you 10.56.62.211 and 10.
Not sure if this may help:
nodetool -h localhost gossipinfo
/10.58.83.109
RELEASE_VERSION:1.1.2
RACK:1b
LOAD:5.9384978406E10
SCHEMA:e7e0ec6c-616e-32e7-ae29-40eae2b82ca8
DC:eu-west
STATUS:NORMAL,85070591730234615865843651857942052864
RPC_ADDRESS:0.0.0.0
/10.248.10.94
RELEASE_VERSIO
Hi all, I have a problem with counters I'd like to solve before going into
production.
When a user writes a comment on my platform, I increase a counter (there is
a counter for each user) and I write a new column in the user-specific row.
Everything worked fine but yesterday I noticed that the column
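As a sketch of the write path being described (one counter per user plus a new column per comment), assuming the pycassa client; the keyspace, column family, and column names are made up for illustration:

import time
from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
counters = ColumnFamily(pool, 'UserCounters')   # a counter column family
comments = ColumnFamily(pool, 'UserComments')

def add_comment(user_id, comment_text):
    # Increment the per-user counter, then add a column to the user's row.
    counters.add(user_id, 'comment_count', 1)
    comments.insert(user_id, {str(time.time()): comment_text})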
When a request for a token change is issued (via nodetool), on what
basis will a node move some of its rows to another node? There would be
no way to scan rows based on the MD5 hash on a given node (if the keys
are not prefixed with the MD5 hash).
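For what it's worth, Cassandra stores rows ordered by their token (the decorated key), so the rows belonging to a moved token range can be found with a range scan; nothing has to be rehashed. A rough Python sketch of the idea (my own illustration, not Cassandra's code):

import bisect
import hashlib

def token_for(key):
    return int.from_bytes(hashlib.md5(key).digest(), "big") % (2 ** 127)

# Rows kept in token order, the way keys are ordered on disk.
rows = sorted((token_for(k), k) for k in [b"a", b"b", b"c", b"d"])
tokens = [t for t, _ in rows]

def rows_in_range(lo, hi):
    # Rows whose token falls in (lo, hi] form one contiguous slice, so a
    # node can stream them away without recomputing any hashes.
    return rows[bisect.bisect_right(tokens, lo):bisect.bisect_right(tokens, hi)]

print(rows_in_range(0, 2 ** 126))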
On Thu, Jul 19, 2012 at 1:43 PM, Patrik Modesto wrote:
We have a CF with 11 secondary indexes (don't ask me why) and I noticed
that restarting cassandra takes much longer compared to other clusters
without secondary indexes. In system.log I see a 20-minute pause on
building indexes; this example shows a 12-minute gap.
INFO [SSTableBatchOpen:13] 2012-07-1
What does "show schema" show? Is the CF showing up?
Are the data files for the CF on disk?
If you poke around with the system CFs, is there any data still present?
On 07/17/2012 02:54 PM, sj.climber wrote:
Looking for ideas on how to diagnose this issue. I have installed v1.1.2 on
a two-node
But isn't QUORUM on a 2-node cluster still 2 nodes?
On 07/17/2012 11:50 PM, Jason Tang wrote:
Yes, ALL is not good for HA, and we met a problem when using QUORUM;
our current solution is to switch Write:QUORUM / Read:QUORUM when we
get an "UnavailableException".
2012/7/18 Jay Pa
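To make the arithmetic behind that question explicit: quorum in Cassandra is floor(RF / 2) + 1, so with RF = 2 a QUORUM operation still needs both replicas.

def quorum(rf):
    # Quorum is floor(RF / 2) + 1 replicas.
    return rf // 2 + 1

for rf in (1, 2, 3, 5):
    # RF, quorum size, node failures tolerated; RF = 2 tolerates zero.
    print(rf, quorum(rf), rf - quorum(rf))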
In Cassandra you don't read-then-write updates; you just write the updates.
Sorry for being dense, but can you clarify a logical vs. physical row?
Batching is useful for reducing round trips to the server.
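A hedged sketch of both points, write-only updates and batched mutations, assuming the pycassa client (the keyspace, column family, and keys are made up for illustration):

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
cf = ColumnFamily(pool, 'Users')

# No read-modify-write: writing a column simply overwrites it, and the
# newest timestamp wins at read time.
cf.insert('user42', {'email': 'new@example.com'})

# Batching several mutations into one round trip to the server.
b = cf.batch(queue_size=50)
b.insert('user42', {'city': 'Prague'})
b.insert('user43', {'city': 'Brno'})
b.send()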
On 07/18/2012 06:18 AM, Leonid Ilyevsky wrote:
I have a question about efficiency of up
Hello All,
We currently have a 0.8 production cluster that I would like to upgrade to 1.1.
Are there any known compatibility or upgrade issues between 0.8 and 1.1? Can a
rolling upgrade be done, or is it all-or-nothing?
Thanks!
-Chris
> We currently have a 0.8 production cluster that I would like to upgrade to
> 1.1. Are there any known compatibility or upgrade issues between 0.8 and 1.1?
> Can a rolling upgrade be done, or is it all-or-nothing?
If you have lots of keys: https://issues.apache.org/jira/browse/CASSANDRA-3820
> A three-node cluster with a replication factor of 3 gets me around 10 ms for
> 100% writes with consistency equal to ONE. The reads are really bad; they are
> around 65 ms.
Using CL ONE in that situation, with a test that runs in a tight loop, can
result in the clients overloading the cluster.
Ev
Check the server logs to see if any errors are reported. If possible,
can you change the logging to DEBUG and run it?
> Note that the UUID did not change,
Sounds fishy.
There is an issue fixed in 1.1.3 similar to this:
https://issues.apache.org/jira/browse/CASSANDRA-4432 but I doubt it applies
Hi,
Due to a consistency problem, we cannot use a direct delete to remove a
row, so we use a TTL on each column of the row instead.
We are using Cassandra as the central storage of a stateful system.
All requests are stored in Cassandra and marked with status NEW, and then
we change it
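A sketch of that TTL-instead-of-delete approach, assuming the pycassa client; the names and the 24-hour TTL are hypothetical:

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
requests = ColumnFamily(pool, 'Requests')

# Every column carries a TTL, so the whole row expires on its own
# instead of needing an explicit row delete.
requests.insert('req-123', {'status': 'NEW', 'payload': '...'}, ttl=86400)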