Hi,
I'm running Cassandra with a very small dataset so that all the data can live
in the memtable only. Below are my configurations:
In jvm.options:
-Xms4G
-Xmx4G
In cassandra.yaml,
memtable_cleanup_threshold: 0.50
memtable_allocation_type: heap_buffers
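As a quick sanity check on the settings above, here is a minimal Python sketch of when a flush would be triggered; the 1/4-of-heap default for the memtable space is an assumption based on the cassandra.yaml documentation, and the function name is mine:

```python
# Sketch of the memtable flush trigger implied by the settings above.
# Assumption: memtable heap space defaults to 1/4 of the JVM heap when
# memtable_heap_space_in_mb is not set explicitly.

def flush_threshold_mb(heap_mb, cleanup_threshold, memtable_space_mb=None):
    """Approximate memtable size (MB) at which Cassandra starts a flush."""
    if memtable_space_mb is None:
        memtable_space_mb = heap_mb // 4  # default: one quarter of the heap
    return memtable_space_mb * cleanup_threshold

# With -Xmx4G (4096 MB) and memtable_cleanup_threshold: 0.50,
# a flush kicks in once memtables hold roughly 512 MB on heap.
print(flush_threshold_mb(4096, 0.50))  # -> 512.0
```

So even a "small" dataset will hit disk once it crosses that threshold, which matches the observation later in this thread.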
As per the documentation in cassandra.yaml, th
Thank you all for the response. I figured out the root cause.
I thought all my data was in memtable only but the data was actually being
dumped to the disk. That's why I was noticing the drop in throughput.
On Wed, May 24, 2017 at 9:42 AM, daemeon reiydelle
wrote:
> You speak of increase. Please
Cqlsh looks at the cluster, not a single node.
“All men dream, but not equally. Those who dream by night in the dusty
recesses of their minds wake up in the day to find it was vanity, but the
dreamers of the day are dangerous men, for they may act their dreams with
open eyes, to make it possible.” — T.E. Lawrence
Run *nodetool cleanup* on the *4.4.4.5* DC node(s). Changing network
topology does not *remove* data - it's a manual task.
But it should prevent it from replicating over to the undesired DC.
Also make sure your load balancing policy is set to DCAwareRoundRobinPolicy,
with the *4.4.4.4* DC set as the
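For illustration only, here is a plain-Python sketch of what a DC-aware round-robin policy does; this is not the driver's actual code, and the class name, host addresses, and DC names are hypothetical:

```python
# Illustrative sketch of DC-aware round-robin host selection:
# only hosts in the configured local DC are handed out for queries;
# hosts in other DCs are ignored (real drivers may keep them as
# remote fallbacks). Not the actual driver implementation.
from itertools import cycle

class DCAwareRoundRobin:
    def __init__(self, hosts_by_dc, local_dc):
        self.local_dc = local_dc
        # Round-robin over local-DC hosts only.
        self._local = cycle(hosts_by_dc.get(local_dc, []))

    def next_host(self):
        return next(self._local)

policy = DCAwareRoundRobin(
    {"DC1": ["4.4.4.4"], "DC2": ["4.4.4.5"]}, local_dc="DC1"
)
print(policy.next_host())  # -> 4.4.4.4 (never the 4.4.4.5 DC)
```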
May I inquire if your configuration is actually data center aware? Do you
understand the difference between LQ and replication?
Did you run `nodetool repair` after changing the keyspace? (not sure if it
makes sense though)
2017-05-16 19:52 GMT-03:00 Nitan Kainth :
> Strange. Anybody else might share something more important.
>
> Sent from my iPhone
>
> On May 16, 2017, at 5:23 PM, suraj pasuparthy
> wrote:
>
> Yes, I see
You speak of an increase. Please provide your results with specific examples,
e.g. a 25% increase in X results in an n% increase in Y. Also please include the
number of nodes, size of total keyspace, replication factor, etc.
Hopefully this is a 6 node cluster with several hundred gig per keyspace,
not some single node free tier box.
A larger memtable means more time during flushes, and a larger heap means longer
GC pauses. You can see these in the system log.
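Those pauses could be spotted with something like this hedged sketch that scans system.log for GCInspector lines; the exact log format shown is an assumption and varies by Cassandra version:

```python
import re

# Hedged sketch: pull GC pause durations out of Cassandra's system.log.
# The GCInspector line format below is an assumption based on typical
# 2.x/3.x output and may differ in your version.
GC_LINE = re.compile(r"GCInspector.*?(\w+ GC) in (\d+)ms")

def gc_pauses(lines):
    """Yield (collector, pause_ms) for each GCInspector line found."""
    for line in lines:
        m = GC_LINE.search(line)
        if m:
            yield m.group(1), int(m.group(2))

sample = ["INFO  GCInspector.java:284 - ParNew GC in 327ms.  CMS Old Gen: ..."]
print(list(gc_pauses(sample)))  # -> [('ParNew GC', 327)]
```

In practice you would pass `open("/var/log/cassandra/system.log")` instead of the sample list.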
Sent from my iPhone
> On May 24, 2017, at 11:31 AM, preetika tyagi wrote:
>
> Hi,
>
> I'm experimenting with memtable/heap size on my Cassandra server to
> understand how it
Hi,
I'm experimenting with memtable/heap size on my Cassandra server to
understand how it impacts the latency/throughput for read requests.
I vary the heap size (-Xms and -Xmx) in jvm.options, so the memtable will be 1/4 of
this. When I increase the heap size, and hence the memtable, I notice a drop
in throughput
Hi,
The list is open:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user,
feel free to subscribe.
DataStax is the main maintainer of the Java driver, which is open source
(https://github.com/datastax/java-driver) and is not the same driver as
the DSE one: http
Hi Nicolas
I think only DataStax Enterprise (paid) C* users can ask questions / get
support from DataStax :(
On Tue, May 23, 2017 at 9:44 PM, techpyaasa . wrote:
> Thanks for your reply..
>
> On Tue, May 23, 2017 at 7:40 PM, Nicolas Guyomar <
> nicolas.guyo...@gmail.com> wrote:
>
>> Hi,
>>
>> If
It might be a bug.
Cassandra, AFAIK, scans those files for changes and updates the topology
(So you don't need a restart if you change the files). It might be the case
that the absence of the file is still noticed by Cassandra even if it is
not really used.
I can do a small test to confirm, if so
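For reference, a typical snitch switch touches the files below; the DC/rack values are hypothetical. As far as I know, GossipingPropertyFileSnitch also reads cassandra-topology.properties as a migration fallback when the file is present, which could explain why Cassandra notices its presence or absence:

```
# cassandra.yaml -- the snitch being switched to
endpoint_snitch: GossipingPropertyFileSnitch

# cassandra-rackdc.properties -- read by GossipingPropertyFileSnitch
# (values below are placeholders)
dc=DC1
rack=RAC1
```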
Hi All,
We have a new observation.
Earlier, for implementing multiple network interfaces, we were deleting
cassandra-topology.properties in the last step (the steps are mentioned in the
mail thread).
The rationale was that, because we are using an altogether new endpoint_snitch,
we don't require cassan