Schema propagation takes time:
https://issues.apache.org/jira/browse/CASSANDRA-5725
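
One way to see whether that settling has finished is to poll schema_version
from system.local and system.peers until every node reports the same value.
Rough sketch, assuming the DataStax Java driver 2.0 against a local node; the
contact point and the "demo" keyspace are placeholders, not from this thread:

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SchemaAgreementCheck {

    // Polls system.local and system.peers until every reachable node
    // reports the same schema_version, or the timeout expires.
    static boolean waitForSchemaAgreement(Session session, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            Set<UUID> versions = new HashSet<UUID>();
            Row local = session.execute(
                    "SELECT schema_version FROM system.local").one();
            versions.add(local.getUUID("schema_version"));
            for (Row peer : session.execute(
                    "SELECT schema_version FROM system.peers")) {
                versions.add(peer.getUUID("schema_version"));
            }
            if (versions.size() == 1) {
                return true;       // all nodes agree on one schema version
            }
            Thread.sleep(200);     // schema still propagating, retry
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Any DDL will do; IF NOT EXISTS needs Cassandra 2.0+.
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        System.out.println("schema agreed: "
                + waitForSchemaAgreement(session, 10000));
        cluster.close();
    }
}

(Newer driver versions also expose an equivalent schema-agreement check on the
cluster metadata, if yours has it.)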

@Robert: do we still need to manually clean up snapshots when truncating? I
remember that on the 1.2 branch, even though the auto_snapshot param was set
to false, truncating led to snapshot creation, which forced us to manually
remove the snapshot folder on disk.


On Sat, Jun 21, 2014 at 12:01 AM, Robert Stupp <sn...@snazy.de> wrote:

>
> On 20.06.2014 at 23:48, Pavel Kogan <pavel.ko...@cortica.com> wrote:
>
> > 1) When a new keyspace with its column families has just been created
> > (every round hour), other modules sometimes fail to read/write data, and
> > we lose requests. Can it be that creating a keyspace and its column
> > families is an async operation, or is there propagation time between nodes?
>
> Schema needs to "settle down" (nodes actually agree on a common view) -
> this may take several seconds until all nodes have that common view. Turn
> on DEBUG output in the Java driver, for example, to see these messages.
> CL ONE requires the "one" node to be up and running - if that node's not
> running, your request will definitely fail. Maybe you want to try CL ANY or
> increase RF to 2.
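
To illustrate the suggestion above: consistency is set per statement in the
Java driver, and the replication factor is changed with ALTER KEYSPACE. A
rough sketch, where the keyspace/table names are made up and CL ANY is valid
for writes only:

import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ConsistencyExample {

    // Hypothetical table myks.events with a timeuuid id column.
    // ANY lets the write be acknowledged even if no replica is up
    // (a hint is stored); reads need ONE or higher.
    static void writeWithClAny(Session session) {
        SimpleStatement insert = new SimpleStatement(
                "INSERT INTO myks.events (id, payload) VALUES (now(), 'x')");
        insert.setConsistencyLevel(ConsistencyLevel.ANY);
        session.execute(insert);
    }

    // Raising RF to 2; run "nodetool repair" afterwards so existing
    // data is copied to the new replicas.
    static void raiseReplicationFactor(Session session) {
        session.execute("ALTER KEYSPACE myks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 2}");
    }
}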
>
> > 2) We are reading and writing intensively, and usually I don't need the
> > data for more than 1-2 hours. What optimizations can I do to increase my
> > small cluster's read performance? Cluster configuration - 3 identical
> > nodes: i7 3 GHz, 120 GB SSD, 16 GB RAM, CentOS 6.
>
> Depending on the data, table layout, access patterns and C* version, try
> various key cache and maybe row cache configurations, in both the table
> options and cassandra.yaml.
>
>
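
For the caching suggestion, a small illustration (the keyspace/table name is
made up; the exact caching syntax depends on the Cassandra version):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CacheTuningExample {

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Pre-2.1 syntax: caching is a single string ('all', 'keys_only',
        // 'rows_only', 'none'). From 2.1 on it is a map, e.g.
        // {'keys': 'ALL', 'rows_per_partition': '100'}.
        session.execute("ALTER TABLE myks.events WITH caching = 'keys_only'");

        // The cache sizes themselves are set per node in cassandra.yaml via
        // key_cache_size_in_mb and row_cache_size_in_mb; the row cache is
        // off by default and only pays off for small, hot, mostly-read rows.

        cluster.close();
    }
}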
