I don't have time to look into the reasons for that error, but that does not
sound good. It kind of sounds like there are multiple migration chains out
there in the cluster. This could come from applying changes to different nodes at
the same time.
Is this a prod system? If not I would shut it down
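(As an aside, the usual way to avoid competing migration chains is to push schema
changes through a single node and wait for agreement before the next change. A
minimal cassandra-cli sketch, where MyKeyspace and Foo are made-up names:)

  bin/cassandra-cli -h 192.168.1.9 -p 9160
  [default@unknown] use MyKeyspace;
  [default@MyKeyspace] create column family Foo;
  [default@MyKeyspace] describe cluster;

(The point is to run describe cluster after each change and wait until only one
schema version is listed before issuing the next change.)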
And a lot of "not applied" logs:
DEBUG [MigrationStage:1] 2011-08-10 11:36:29,376
DefinitionsUpdateVerbHandler.java (line 70) Applying AddColumnFamily from
/192.168.1.9
DEBUG [MigrationStage:1] 2011-08-10 11:36:29,376
DefinitionsUpdateVerbHandler.java (line 80) Migration not applied Previous
ver
Hi Aaron,
I set the log level to DEBUG, and found a lot of forceFlush debug info in the
log:
DEBUG [StreamStage:1] 2011-08-10 11:31:56,345 ColumnFamilyStore.java (line 725)
forceFlush requested but everything is clean
DEBUG [StreamStage:1] 2011-08-10 11:31:56,345 ColumnFamilyStore.java (line
um. There has got to be something stopping the migration from completing.
Turn the logging up to DEBUG before starting and look for messages from
MigrationManager.java
Provide all the log messages from Migration.java on the 1.27 node
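(A rough sketch of how to do that on a 0.7/0.8-era install, assuming the default
conf/log4j-server.properties; the narrower logger names below are assumptions and
may not match the package layout exactly:)

  # conf/log4j-server.properties
  log4j.rootLogger=DEBUG,stdout,R
  # or, more narrowly (assumed logger names):
  # log4j.logger.org.apache.cassandra.db.migration=DEBUG
  # log4j.logger.org.apache.cassandra.service.MigrationManager=DEBUG

(Then start the node and watch system.log.)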
Cheers
-
Aaron Morton
Freelance Cassandra Developer
Hi Aaron,
I repeated the whole procedure (rough shell sketch after the steps below):
1. kill the cassandra instance on 1.27.
2. rm the data/system/Migrations-g-*
3. rm the data/system/Schema-g-*
4. bin/cassandra to start Cassandra.
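(In shell terms, roughly, assuming the default data directory of
/var/lib/cassandra/data; adjust paths and how you stop the process to your install:)

  # on 192.168.1.27
  kill <cassandra-pid>
  rm /var/lib/cassandra/data/system/Migrations-g-*
  rm /var/lib/cassandra/data/system/Schema-g-*
  bin/cassandra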
Now, the migration seems to have stopped and I do not find any errors in the system.log yet.
The ring looks good:
[
Did you check the logs on 1.27 for errors?
Could you be seeing this ? https://issues.apache.org/jira/browse/CASSANDRA-2867
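(One quick way to look, assuming the default log location of
/var/log/cassandra/system.log:)

  grep -iE "error|exception" /var/log/cassandra/system.log | tail -50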
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 7 Aug 2011, at 16:24, Dikang Gu wrote:
> I restart both
I shut down both nodes, deleted the schema* and migration* sstables, and
restarted them.
The current cluster looks like this:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema v
After the restart, what was in the logs on the 1.27 machine from the
Migration.java logger? Some of the messages will start with "Applying
migration".
You should have shut down both of the nodes, then deleted the schema* and
migration* system sstables, then restarted one of them and wat
I have tried this, but the schema still does not agree in the cluster:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
UNREACHABLE: [192.168.1.28]
75eece10-b
Based on http://wiki.apache.org/cassandra/FAQ#schema_disagreement,
75eece10-bf48-11e0--4d205df954a7 owns the majority, so shut down and
remove the schema* and migration* sstables from both 192.168.1.28 and
192.168.1.27
2011/8/5 Dikang Gu :
> [default@unknown] describe cluster;
> Cluster Informa
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
743fe590-bf48-11e0--4d205df954a7: [192.168.1.28]
75eece10-bf48-11e0--4d205df954a7: [192.168.1.9, 192.1