That wasn't horrible at all. After testing, provided all goes well, I can
submit this back to the main TWCS repo if you think it's worth it.
Either way, do you mind just reviewing briefly for obvious mistakes?
https://github.com/bspindler/twcs/commit/7ba388dbf41b1c9dc1b70661ad69273b258139da
Thanks!
I hope I can do as you suggest and leapfrog to 3.11 rather than
two-stepping it from 3.7->3.11.
Just having TWCS has saved me lots of hassle, so it's all good. Thanks for
all you do for our community.
-B
On Fri, Nov 2, 2018 at 3:54 PM Jeff Jirsa wrote:
I'm sincerely sorry for the hassle, but for various reasons beyond my
control, it's unlikely I'll update my repo (at least not me, personally).
One likely fix is to grab the actual java classes from the apache repo, pull
them in, fix the package names, and compile (essentially making your own
3.11 branch).
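A rough sketch of that approach (the branch name, file glob, and paths here
are assumptions, not anything verified against either repo):

git clone --depth 1 -b cassandra-3.11 https://github.com/apache/cassandra.git
cp cassandra/src/java/org/apache/cassandra/db/compaction/TimeWindowCompaction*.java \
  twcs/src/main/java/com/jeffjirsa/cassandra/db/compaction/
# move the classes into the fork's namespace so existing table schemas keep resolving
sed -i 's/^package org\.apache\.cassandra\.db\.compaction;/package com.jeffjirsa.cassandra.db.compaction;/' \
  twcs/src/main/java/com/jeffjirsa/cassandra/db/compaction/TimeWindowCompaction*.java
# rebuild the jar against the 3.11 artifacts
(cd twcs && mvn clean package)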
You are right, it won't even compile:
[INFO] ------------------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] ------------------------------------------------------------------------
[ERROR]
/Users/bspindler/src/github/twcs/src/main/java/com/jeffjirsa/cassandra/db/compaction/T
There's a chance it will fail to work - it's possible method signatures
changed between 3.0 and 3.11. Try it in a test cluster before prod.
--
Jeff Jirsa
> On Nov 2, 2018, at 11:49 AM, Brian Spindler wrote:
Nevermind, I spoke too quickly. I can change the cass version in the
pom.xml and recompile, thanks!
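For anyone following along, the change amounts to something like this; the
<cassandra.version> property is an assumption about the fork's pom.xml
rather than verified contents, and macOS sed wants -i '' instead of -i:

# hypothetical: point the fork at the 3.11.3 artifacts, then rebuild
sed -i 's|<cassandra.version>.*</cassandra.version>|<cassandra.version>3.11.3</cassandra.version>|' pom.xml
mvn clean package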
On Fri, Nov 2, 2018 at 2:38 PM Brian Spindler
wrote:
[image: image.png]
On Fri, Nov 2, 2018 at 2:34 PM Jeff Jirsa wrote:
Easiest approach is to build the 3.11 jar from my repo, upgrade, then ALTER
the table to use the official TWCS (org.apache.cassandra) jar.
Sorry for the headache. I hope I have a 3.11 branch for you.
--
Jeff Jirsa
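As an illustration of that ALTER step (the keyspace, table, and window
settings here are placeholders, not values from this thread), run once every
node is on 3.11:

# the short class name resolves to org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy
cqlsh -e "ALTER TABLE my_ks.my_table WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1' };"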
> On Nov 2, 2018, at 11:28 AM, Brian Spindler wrote:
Hi all, we're planning an upgrade from 2.1.5->3.11.3. Currently we have
several column families configured with the TWCS class
'com.jeffjirsa.cassandra.db.compaction.TimeWindowCompactionStrategy', and
with 3.11.3 we need to set it to 'TimeWindowCompactionStrategy'.
Is that a safe operation? Will cas
Taking a step back, in case you are hosted on a cloud: we are entirely
on AWS and upload snapshots, and incremental files periodically, to S3 as
part of the backup job. It gives us durability and the freedom to clean up
locally sooner than would otherwise be possible. Currently, we also are
try
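A rough sketch of that kind of backup job, assuming the AWS CLI, the default
data directory layout, and a made-up bucket name:

# hypothetical: snapshot with a tag, push only that snapshot to S3, then
# reclaim the local disk space
TAG=daily-2018-11-02
nodetool snapshot -t "$TAG" my_keyspace
aws s3 sync /var/lib/cassandra/data/my_keyspace \
  "s3://my-backup-bucket/$(hostname)/$TAG/" \
  --exclude '*' --include "*/snapshots/$TAG/*"
nodetool clearsnapshot -t "$TAG"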
Lou,
when taking a snapshot, set a tag (e.g. from "uuidgen --time") and use this
tag to clear out the old snapshot.
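A minimal sketch of that tagging scheme (the keyspace and log file are
placeholders):

# a time-based UUID gives every snapshot a unique, sortable tag
TAG=$(uuidgen --time)
nodetool snapshot -t "$TAG" my_keyspace
echo "$TAG" >> /var/backups/snapshot-tags.log
# later, clear exactly the old tag recorded by an earlier run
OLD_TAG=$(head -n 1 /var/backups/snapshot-tags.log)
nodetool clearsnapshot -t "$OLD_TAG"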
On Friday, November 2, 2018, 9:28:03 AM PDT, Oleksandr Shulgin
wrote:
On Fri, Nov 2, 2018 at 5:15 PM Lou DeGenaro wrote:
I'm looking to hear how others are coping with snapshots.
According to the doc:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsBackupDeleteSnapshot.html
*When taking a snapshot, previous snapshot files are not automatically
deleted. You should remove old snapshots that are no longer needed.*
Onmstester,
Thanks a ton, will try to execute the same. I will look into that thread.
Thanks and Regards,
Goutham Reddy Aenugu.
On Fri, Nov 2, 2018 at 1:36 AM onmstester onmstester
wrote:
Is more information needed, for example from logs or verbose output? Is
anyone else seeing this behaviour?
Thanks.
Lou.
On 2018/10/30 15:36:38, Lou DeGenaro wrote:
> It seems that "nodetool listsnapshots" is unreliable?
>
> 1. when issued, nodetool listsnapshots reports there are no snap
Unlogged batch meaningfully outperforms parallel execution of individual
statements, especially at scale, and creates lower memory pressure on both the
clients and the cluster. They do outperform parallel individual statements, but
at the cost of higher pressure on coordinators, which leads to more blocked
Native Transport Requests.
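For the shape of the comparison, an illustrative unlogged batch; the table
and values are invented, and batches spanning many partitions are what push
the extra fan-out work onto the coordinator:

cqlsh -e "
BEGIN UNLOGGED BATCH
  INSERT INTO my_ks.events (id, seq, payload) VALUES (42, 1, 'a');
  INSERT INTO my_ks.events (id, seq, payload) VALUES (42, 2, 'b');
APPLY BATCH;"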
I think that is not possible. If currently both DCs are in use, you should
remove one of them (gently, by changing the replication config), then change
num_tokens in the removed DC, add it again by changing the replication
config, and finally do the same for the other DC. P.S. A while ago, there
was a thread about this.
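A sketch of one round of that procedure, with made-up keyspace and DC names:

# step 1: drop dc1 from replication so it can be torn down
cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
  { 'class': 'NetworkTopologyStrategy', 'dc2': '3' };"
# step 2: decommission the dc1 nodes, set num_tokens: 256 in cassandra.yaml,
# and bootstrap them back as a fresh datacenter
# step 3: add dc1 back to replication and stream the data over
cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
  { 'class': 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3' };"
nodetool rebuild -- dc2    # on each new dc1 node, sourcing from dc2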
Onmstester,
Thanks for the reply, but for both DCs I need to change my num_tokens
value from 8 to 256, so that is the challenge I am facing. Any comments?
Thanks and Regards,
Goutham
On Fri, Nov 2, 2018 at 1:08 AM onmstester onmstester
wrote:
> IMHO, the best option with two datacenters is
IMHO, the best option with two datacenters is to configure the replication
strategy to stream data from the DC with the wrong num_tokens to the correct
one; then a repair on each node would move your data to the other DC.
Elliott,
Thanks, how about if we have two datacenters? Any comments?
Thanks and Regards,
Goutham.
On Thu, Nov 1, 2018 at 5:40 PM Elliott Sims wrote:
> As far as I know, it's not possible to change it live. You have to create
> a new "datacenter" with new hosts using the new num_tokens