Hi,
I have to add a column and a value to existing data. I tried a batch
statement, but got this warning from Cassandra:

WARN [Native-Transport-Requests:1423504] 2014-10-22 16:49:11,426
BatchStatement.java (line 223) Batch of prepared statements for
[keyspace.table] is of size 26600, exceeding specified [...]
You should split your batch statements into smaller batches, say 100
operations per batch (or fewer if you keep getting those warnings). You can
also increase batch_size_warn_threshold_in_kb in your cassandra.yaml a bit;
I'm using 20 KB in my cluster. You can read more in the relevant Jira:
https://i[...]
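With the DataStax Java driver the splitting is just a counter and a flush.
A minimal sketch; mykeyspace, mytable, new_col, and id are placeholder
names, and the loop stands in for your real row set:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class ChunkedBatchUpdate {
    public static void main(String[] args) {
        final int BATCH_SIZE = 100; // operations per batch, per the advice above

        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");

        // Populate a newly added column on existing rows (placeholder schema).
        PreparedStatement ps = session.prepare(
                "UPDATE mytable SET new_col = ? WHERE id = ?");

        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        for (int id = 0; id < 1000; id++) {        // stand-in for the real rows
            batch.add(ps.bind("value-" + id, id));
            if (batch.size() >= BATCH_SIZE) {
                session.execute(batch);            // flush before the batch gets big
                batch.clear();
            }
        }
        if (batch.size() > 0) {
            session.execute(batch);                // flush the remainder
        }
        cluster.close();
    }
}

An unlogged batch across many partitions gives you no atomicity anyway, so
there is no real downside to splitting it like this.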
I'm having problems running nodetool repair -inc -par -pr on my 2.1.1
cluster due to a "Did not get positive replies from all endpoints" error.
Here's an example output:
root@db08-3:~# nodetool repair -par -inc -pr
[2014-10-30 10:33:02,396] Nothing to repair for keyspace 'system'
[2014-10-30 10:33:[...]
It appears to come from the ActiveRepairService.prepareForRepair portion of
the code.
Are you sure all nodes are reachable from the node you are initiating the
repair on, at the same time?
Any node up/down/died messages in the logs?
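For a quick client-side view of what the driver thinks about each host, you
can dump the cluster metadata (a sketch with the DataStax Java driver;
nodetool status on each node is still the authoritative check):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

public class HostCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        cluster.init(); // fetch cluster metadata without opening a session
        for (Host host : cluster.getMetadata().getAllHosts()) {
            System.out.printf("%s dc=%s up=%s%n",
                    host.getAddress(), host.getDatacenter(), host.isUp());
        }
        cluster.close();
    }
}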
Rahul Neelakantan
> On Oct 30, 2014, at 6:37 AM, Juho Mäkinen wrote:
>
> I'm having problems running nodetool repair -inc -par -pr on my 2.1.1
> cluster [...]
No, the cluster seems to be performing just fine. It seems that the
prepareForRepair callback() could easily be modified to print which node(s)
are unable to respond, so that the debugging effort could be focused
better. This of course doesn't help in this case, as it's not trivial to
add the log lines to a running cluster.

I will give adding the logging a shot.
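Roughly this shape (a hypothetical sketch only; the class and method names
here are made up and are not from the actual 2.1 source):

import java.net.InetAddress;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Track which endpoints we are still waiting on during the prepare phase,
// so the failure message can name them instead of just saying
// "Did not get positive replies from all endpoints".
public class PrepareReplyTracker {
    private static final Logger logger =
            LoggerFactory.getLogger(PrepareReplyTracker.class);
    private final Set<InetAddress> pending = Collections.newSetFromMap(
            new ConcurrentHashMap<InetAddress, Boolean>());

    public void sentTo(InetAddress endpoint)      { pending.add(endpoint); }
    public void repliedFrom(InetAddress endpoint) { pending.remove(endpoint); }

    public void onFailure() {
        if (!pending.isEmpty()) {
            logger.error("Did not get positive replies from all endpoints;"
                    + " still waiting on {}", pending);
        }
    }
}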
I've tried some experiments and I have no clue what could be happening
anymore:
I tried setting all nodes to a stream throughput of 1 except one, to see if
somehow a node was getting overloaded by too many streams coming in at
once. Nope.

I went through the source [...]
On Wed, Oct 29, 2014 at 10:39 PM, Aravindan T wrote:
> What could be the reasons for the stream error other than SSTable
> corruption?
>
There are tons of reasons streams fail. The Cassandra team is aware of how
painful it makes things, so they are working on them.

Be sure that a firewall is not dropping connections between your nodes.
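A trivial way to rule out the simple cases is a socket probe from every
node to every other node on the inter-node storage/streaming port (7000 by
default, 7001 for SSL). This won't catch a firewall that kills long-lived
idle connections mid-stream, but it's a start. A sketch, with placeholder
addresses:

import java.net.InetSocketAddress;
import java.net.Socket;

public class StoragePortProbe {
    public static void main(String[] args) {
        String[] nodes = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}; // placeholders
        int storagePort = 7000; // default inter-node port (storage_port)
        for (String node : nodes) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(node, storagePort), 2000);
                System.out.println(node + ": reachable");
            } catch (Exception e) {
                System.out.println(node + ": FAILED - " + e.getMessage());
            }
        }
    }
}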
I've been trying to go through the logs, but I can't say I understand the
details very well:

INFO [SlabPoolCleaner] 2014-10-30 19:20:18,446 ColumnFamilyStore.java:856
- Enqueuing flush of loc: 7977119 (1%) on-heap, 0 (0%) off-heap
DEBUG [SharedPool-Worker-22] 2014-10-30 19:20:18,446
AbstractSimple[...]
Well, the answer was secondary indexes. I am guessing they were corrupted
somehow. I dropped all of them, ran cleanup, and now the nodes are
bootstrapping fine.
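For anyone hitting the same thing, the drop itself is plain CQL (the index
and keyspace names below are placeholders; find yours with DESCRIBE in
cqlsh), followed by cleanup on each node:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DropSecondaryIndexes {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Placeholder index names.
        session.execute("DROP INDEX IF EXISTS mykeyspace.idx_user_email");
        session.execute("DROP INDEX IF EXISTS mykeyspace.idx_user_country");
        cluster.close();
        // Then run on each node: nodetool cleanup mykeyspace
    }
}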
On Thu, Oct 30, 2014 at 3:50 PM, Maxime wrote:
> I've been trying to go through the logs, but I can't say I understand the
> details very well: