Thank you all,
The issue was resolved (or more exactly bypassed) by adding a small Python
script running hourly in cron on 1-2 nodes, which pre-provisions the next
hour's keyspace. One hour is definitely enough time for schema propagation.
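A minimal sketch of such a pre-provisioning job, for illustration only (the
keyspace prefix, replication settings and table layout below are assumptions,
not the actual script):

#!/usr/bin/env python
# Hourly pre-provisioning job, meant to run from cron on 1-2 nodes.
# Keyspace prefix, replication factor and table layout are illustrative.
from datetime import datetime, timedelta

from cassandra.cluster import Cluster


def next_hour_keyspace():
    # Keyspace for the *next* hour, e.g. blobs_2014_06_22_10 (naming assumed).
    ts = datetime.utcnow() + timedelta(hours=1)
    return ts.strftime("blobs_%Y_%m_%d_%H")


def main():
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    ks = next_hour_keyspace()

    # IF NOT EXISTS keeps the job safe to run on more than one node.
    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS %s WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 2}" % ks)
    session.execute(
        "CREATE TABLE IF NOT EXISTS %s.blobs (id uuid PRIMARY KEY, data blob)"
        % ks)
    cluster.shutdown()


if __name__ == "__main__":
    main()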
Regards,
Pavel
On Sun, Jun 22, 2014 at 9:35 AM, Robert Stupp wrote:
On 21.06.2014, at 00:37, Pavel Kogan wrote:
> Thanks,
>
> Is there a programmatic way to know when the schema has finished settling down?
Yep - take a look at
com.datastax.driver.core.ControlConnection#waitForSchemaAgreement in the Java
Driver source. It basically compares the 'schema_version' column reported by each node.
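The same check can also be done by hand from any client by comparing
schema_version across system.local and system.peers. A rough sketch with the
Python driver (contact point and timeout are arbitrary):

# Rough equivalent of waitForSchemaAgreement: the schema has settled when
# every reachable node reports the same schema_version.
import time

from cassandra.cluster import Cluster


def wait_for_schema_agreement(session, timeout=30.0, interval=1.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        local = session.execute("SELECT schema_version FROM system.local")
        peers = session.execute("SELECT schema_version FROM system.peers")
        versions = {row.schema_version for row in local}
        versions |= {row.schema_version for row in peers
                     if row.schema_version is not None}
        if len(versions) == 1:
            return True
        time.sleep(interval)
    return False


cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
if not wait_for_schema_agreement(session):
    raise RuntimeError("schema still not in agreement")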
On Fri, Jun 20, 2014 at 3:09 PM, DuyHai Doan wrote:
> @Robert: do we still need to manually clean up snapshots when truncating?
> I remember that on the 1.2 branch, even though the auto_snapshot param
> was set to false, truncating led to snapshot creation that forced us to
> manually remove the snapshots.
Thanks,
Is there a programmatic way to know when the schema has finished settling down?
Can working with RF=2 and CL=ANY result in any consistency problems? I am
not sure whether I can have consistency problems if I don't do updates, only
writes and reads. Can I?
By the way, I am using Cassandra 2.0.8.
Pavel
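For reference, the consistency level is chosen per statement in the DataStax
drivers, so the CL=ANY writes in question look roughly like this with the
Python driver (keyspace and table names are made up for the example):

from uuid import uuid4

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("blobs_2014_06_22_10")  # example keyspace name

blob_id = uuid4()

# Write at CL.ANY: the coordinator acknowledges even if only a hint is stored.
write = SimpleStatement(
    "INSERT INTO blobs (id, data) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ANY)
session.execute(write, (blob_id, b"payload"))

# Read at CL.ONE: any single replica answers (ANY is not valid for reads).
read = SimpleStatement(
    "SELECT data FROM blobs WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE)
rows = list(session.execute(read, (blob_id,)))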
Thanks Robert,
Can you please explain what problems DROP/CREATE keyspace may cause?
It seems truncate works per column family, and I have up to 10.
What should I delete from disk in that case? I can't delete the whole folder,
right? I need to delete all content under each CF folder, but not the folders themselves?
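For what it's worth, truncating every column family of one keyspace is just a
loop; a small Python-driver sketch against the 2.0.x system tables (the
keyspace name is an example, and snapshots left behind may still need separate
cleanup, as mentioned elsewhere in this thread):

# Truncate every column family of one keyspace instead of dropping it.
# Table names are discovered from the 2.0.x schema tables.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

ks = "blobs_2014_06_22_09"  # example keyspace to empty
rows = session.execute(
    "SELECT columnfamily_name FROM system.schema_columnfamilies "
    "WHERE keyspace_name = %s", (ks,))
for row in rows:
    session.execute("TRUNCATE %s.%s" % (ks, row.columnfamily_name))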
Schema propagation takes time:
https://issues.apache.org/jira/browse/CASSANDRA-5725
@Robert: do we still need to manually clean up snapshots when truncating? I
remember that on the 1.2 branch, even though the auto_snapshot param was
set to false, truncating led to snapshot creation that forced us to manually remove the snapshots.
On Fri, Jun 20, 2014 at 2:48 PM, Pavel Kogan wrote:
> So what we did was create a new keyspace named _MM_dd_HH every hour, and
> when the disk becomes full, a script running in crontab on each node drops the
> keyspace with the "IF EXISTS" flag and deletes the whole keyspace folder. That way
> the whole process is
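A hedged sketch of the cleanup half of that scheme. The keyspace prefix, data
path and retention window here are assumptions (the thread describes dropping
when the disk fills up; this sketch simply drops keyspaces older than a fixed
number of hours), not the actual crontab script:

# Drop an expired hourly keyspace and remove its on-disk directory.
import os
import shutil
from datetime import datetime, timedelta

from cassandra.cluster import Cluster

DATA_DIR = "/var/lib/cassandra/data"  # assumed data_file_directories entry
RETENTION_HOURS = 24                  # assumed retention window

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

cutoff = datetime.utcnow() - timedelta(hours=RETENTION_HOURS)
ks = cutoff.strftime("blobs_%Y_%m_%d_%H")

# IF EXISTS keeps the job idempotent when it runs on every node.
session.execute("DROP KEYSPACE IF EXISTS %s" % ks)

# Remove the keyspace directory so disk space is reclaimed immediately
# (any snapshots taken during the drop live under this directory too).
path = os.path.join(DATA_DIR, ks)
if os.path.isdir(path):
    shutil.rmtree(path)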
On 20.06.2014, at 23:48, Pavel Kogan wrote:
> 1) When a new keyspace with its column families has just been created (every
> round hour), sometimes other modules fail to read/write data, and we lose the
> request. Can it be that the creation of a keyspace and column families is an async
> operation, or there
Hi,
In our project, many distributed modules send each other binary blobs,
up to 100-200 kB each on average. Small JSONs are sent over a message
queue, while Cassandra is used as temporary storage for the blobs. We are
using Cassandra instead of an in-memory distributed cache like Couch due t