Hi everyone,
On the development server, I dropped all the tables and even dropped the keyspace
in order to change the table schema.
Then I created the keyspace and the table again.
However, the actual size of the data directory did not decrease at all, even though
the Disk Load monitored by JMX has decreased.
After that,
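One thing worth checking (an assumption on my part, not something stated above): with auto_snapshot enabled, DROP TABLE / DROP KEYSPACE leaves snapshots behind, and those still occupy the data directory even though they no longer count toward the JMX Load metric. A minimal sketch for checking and clearing them:

    # List any snapshots left behind by the drops, then remove them to
    # reclaim the space in the data directory.
    nodetool listsnapshots
    nodetool clearsnapshot    # clears all snapshots on this node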
Thanks for the quick response. We are using TWCS.
> On 12 Jan 2018, at 11:38 AM, Jeff Jirsa wrote:
>
> Probably not in any measurable way.
>
> --
> Jeff Jirsa
>
>
>> On Jan 11, 2018, at 6:16 PM, Eunsu Kim wrote:
>>
>> Hi everyone
>>
>> We are collecting monitoring data in excess of 100K TPS in Cassandra.
Probably not in any measurable way.
--
Jeff Jirsa
> On Jan 11, 2018, at 6:16 PM, Eunsu Kim wrote:
>
> Hi everyone
>
> We are collecting monitoring data in excess of 100K TPS in Cassandra.
>
> All data is time series data and must have a TTL.
>
> Currently we have set default_time_to_live on the table.
No. Nothing measurable. In fact it should be beneficial if paired with TWCS.
On 12 Jan. 2018 1:17 pm, "Eunsu Kim" wrote:
Hi everyone
We are collecting monitoring data in excess of 100K TPS in Cassandra.
All data is time series data and must have a TTL.
Currently we have set default_time_to_live on the table.
Hi everyone
We are collecting monitoring data in excess of 100K TPS in Cassandra.
All data is time series data and must have a TTL.
Currently we have set default_time_to_live on the table.
Does this have a negative impact on Cassandra throughput performance?
Thank you in advance.
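For reference, a minimal sketch (hypothetical keyspace and table names, 7-day TTL assumed) of the kind of setup being asked about: a time-series table with default_time_to_live paired with TWCS:

    # Hypothetical schema illustrating default_time_to_live together with TWCS.
    cqlsh <<'CQL'
    -- 604800 seconds = 7 days; every write gets this TTL unless overridden.
    CREATE TABLE metrics.sensor_data (
        sensor_id text,
        ts        timestamp,
        value     double,
        PRIMARY KEY (sensor_id, ts)
    ) WITH default_time_to_live = 604800
      AND compaction = {'class': 'TimeWindowCompactionStrategy',
                        'compaction_window_unit': 'DAYS',
                        'compaction_window_size': '1'};
    CQL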
--
Documentation for Virtual box networking can be found here:
https://www.virtualbox.org/manual/ch06.html
You can probably achieve what you're trying to set up using two separate internal
networks and some custom routing on the Windows 10 host.
This question may be better suited for the VirtualBox user mailing list.
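A hedged sketch of the two-internal-networks idea (VM and network names are made up), run from the Windows 10 host:

    # Attach each "DC" VM's first NIC to its own VirtualBox internal network.
    VBoxManage modifyvm "cass-dc1-node1" --nic1 intnet --intnet1 "dc1-net"
    VBoxManage modifyvm "cass-dc1-node2" --nic1 intnet --intnet1 "dc1-net"
    VBoxManage modifyvm "cass-dc2-node1" --nic1 intnet --intnet1 "dc2-net"
    VBoxManage modifyvm "cass-dc2-node2" --nic1 intnet --intnet1 "dc2-net"
    # Traffic between dc1-net and dc2-net still needs routing, e.g. a small
    # router VM attached to both internal networks, or host-only adapters
    # plus static routes on the Windows 10 host.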
Hi,
Can you please help me understand how to set up networking in Oracle VirtualBox
machines so that I can achieve a multiple-DC, multiple-node configuration?
I am using Windows 10 as the host machine. I tried to set it up using NAT but it
is not working.
Please provide some ideas. Thanks.
Nanda
You should be able to avoid querying the tombstones if it's time series
data. Using TWCS, just make sure you don't query data that you know is
expired (assuming you have the time component in your clustering key).
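In other words (hypothetical table, assuming a 7-day TTL and ts as the clustering key), keep the read inside the live window:

    # Only scan the last few days, so SSTables in older TWCS windows that
    # contain nothing but expired data are never touched.
    cqlsh -e "SELECT ts, value FROM metrics.sensor_data
              WHERE sensor_id = 'host-01'
                AND ts > '2018-01-05 00:00:00+0000';"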
Can you elaborate? What interrupt are you referring to? That's a perfectly
legitimate topology; its usefulness depends on your application.
On 11 January 2018 at 13:04, Peng Xiao <2535...@qq.com> wrote:
> Hi there,
>
> We plan to set keyspace1 in DC1 and DC2, keyspace2 in DC3 and DC4, all still
>
Dropped mutations aren't data loss. Data loss implies the data was already
there and is now gone, whereas for a dropped mutation the data was never
there in the first place. A dropped mutation just results in an
inconsistency, or potentially no data if all mutations are dropped, and C*
will tell you
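If it helps, the per-node counters for this are easy to watch; non-zero MUTATION drops point at the kind of inconsistency described above rather than at data disappearing from disk:

    # The "Message type / Dropped" section at the end of the output shows
    # how many MUTATION (and other) messages this node has dropped.
    nodetool tpstats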
Hello Jeff,
Thank you for the reply.
One doubt: if I copy /var/lib/cassandra one to one from the source cluster
to the destination cluster nodes, change the cluster name in the configuration,
delete the system.peers table, and restart each Cassandra node, do you think
the cluster will come up properly?
Make sure the new cluster has a different cluster name, and avoid copying
system.peers if you can. Doing so risks merging your new cluster and
old cluster if they’re able to reach each other.
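A minimal sketch of the cluster-name part of that advice (the path assumes a package install and the name is made up):

    # Give the copied cluster its own name before the nodes first start, so
    # it can never gossip its way back into the original cluster.
    sed -i "s/^cluster_name:.*/cluster_name: 'restore-test'/" /etc/cassandra/cassandra.yaml
    # And skip the system/peers-* directory when copying /var/lib/cassandra over.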
--
Jeff Jirsa
> On Jan 11, 2018, at 1:41 AM, Pradeep Chhetri wrote:
>
> Thank you very much
Hi there,
We plan to set keyspace1 in DC1 and DC2, keyspace2 in DC3 and DC4, all still in
the same cluster, to avoid the interrupt. Is there any potential risk for this
architecture?
Thanks,
Peng Xiao
Thank you very much Jean. Since I don't have any constraints, as you said,
I will try copying the complete keyspace node by node first and will
do a nodetool refresh and see if it works.
On Thu, Jan 11, 2018 at 3:21 PM, Jean Carlo wrote:
> Hello,
>
> Basically, every node has to have the
Hello,
Basically, every node has to have the same token ranges. So yes, you have to
play with initial_token, keeping the same number of tokens per node as on the
source cluster. To save time, and if you don't have any constraints about the
name of the cluster etc., you can just copy and paste the complete
Hello Jean,
I am running Cassandra 3.11.1.
Since I don't have much Cassandra operations experience yet, I have a
follow-up question: how can I ensure the same token range distribution?
Do I need to set the initial_token configuration for each Cassandra node?
Thank you for the quick response.
Hello Pradeep,
Actually the key here is to know whether your cluster has the same token range
distribution. So it is not only the same size, but also the same tokens,
matched node by node, from the source cluster to the destination cluster. In
that case, you can use nodetool refresh. So after copying all your sstable
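A hedged sketch of that procedure (keyspace/table names and the SOURCE_NODE host are placeholders): read each source node's tokens, pin the matching destination node to them via initial_token before its first start, then load the copied SSTables:

    # 1. On each source node, record its tokens.
    cqlsh SOURCE_NODE -e "SELECT tokens FROM system.local;"
    # 2. On the matching destination node, before its first start, set in
    #    cassandra.yaml:  initial_token: <comma-separated tokens from step 1>
    # 3. After copying the SSTables into the corresponding table data
    #    directories, load them without a restart.
    nodetool refresh my_keyspace my_table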
Hello Alain,
Thanks for the praise, and be aware that your hard work has been noticed
(at least by us), as we link in the README to your post
http://thelastpickle.com/blog/2017/12/05/datadog-tlp-dashboards.html. Thanks
for having spent the time to write it; as you said, creating comprehensive
dashboards
Thanks for the tips, Alan. The cluster is entirely healthy. But the
connection between DCs is a VPN, managed by a third party, so it is
possible it might be flaky. However, I would expect the rebuild job to
be able to recover from connection timeout/reset types of errors
without a need for manual intervention
Hello everyone,
We are running a Cassandra cluster inside containers on Kubernetes. We have
a requirement where we need to restore a fresh new cluster from an existing
snapshot on a weekly basis.
Currently, while doing it manually, I need to copy the snapshot folder
inside the container and then run sstab
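For what it's worth, the manual step usually boils down to something like this (pod name, keyspace/table, and the table directory suffix are all placeholders):

    # Copy the snapshot contents into the live table directory inside the
    # pod, then tell Cassandra to load the new SSTables without a restart.
    kubectl cp ./snapshots/my_keyspace/my_table/ \
        cassandra-0:/var/lib/cassandra/data/my_keyspace/my_table-a1b2c3d4e5f611e8b568c3f4a9e6d7c1/
    kubectl exec cassandra-0 -- nodetool refresh my_keyspace my_table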
Hello,
Could the following be interpreted to mean that 'Dropped Mutations' in some
cases amount to data loss?
http://cassandra.apache.org/doc/latest/faq/index.html#why-message-dropped
For writes, this means that the mutation was not applied to all replicas it
was sent to. The inconsistency will be repaired by read repair
Hi, I'm having a problem after upgrading Cassandra 3.0.14 to 3.11.1.
When running a repair, the node I'm issuing the repair from starts to use all
available memory and then the GC grinds it to death. I'm seeing with
nodetool compactionstats that there are a lot of (> 100) active concurrent
compactions