It's clear from discussion on this list that the current "storage_compatibility_mode" implementation and upgrade path for 5.0 are a source of real and legitimate user pain, and are likely to result in many organizations slowing their adoption of the release. Would love to discuss on dev@ how we can
Hi Jeff, Repair is not a prerequisite for upgrading from 3.x to 4.x (though it's always recommended to run repair as a continuous process). Repair is not supported between nodes running different major versions, so it should be disabled during the upgrade. There are quite a few fixes for hung repair sessions
If you don't have an explicit goal of dropping compact storage, it's not necessary to do so as a prerequisite to upgrading to 4.x+. Development community members recognized that introducing mandatory schema changes as a prerequisite to upgrading to 4.x would increase operator and user overhead and limit
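If you do want to drop compact storage, a minimal sketch of the statement, assuming a hypothetical keyspace/table and a 3.0.x/3.11.x release recent enough to support it:

```
-- hypothetical keyspace/table; supported in recent 3.0.x/3.11.x and newer releases
ALTER TABLE ks.legacy_table DROP COMPACT STORAGE;
```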
The “Since Version” for the ticket is set to 3.0.19, presumably based on
C-14096 as the predecessor for this ticket.
C-14096 was merged up into 3.11.x in the 3.11.5 release, so 3.11.5 would be the
equivalent “since version” for that release series. The patch addressing this
ticket is included in
Pardon me, that should read user-unsubscr...@cassandra.apache.org for this list.
On Jan 31, 2024, at 10:07 AM, C. Scott Andreas wrote:
Hi Matt, To unsubscribe from this list, send a blank email to dev-unsubscr...@cassandra.apache.org . All messages
or replies to the list are distributed to all subscribers of the list. As the project is volunteer-run, others are
not able to take this action on behalf of subscribers. For more details
Upgrading from 3.11.x to 4.1.x is supported, yes. As the documentation you
reference mentions, it is not possible to downgrade from 4.x to 3.x.
Note that running repair during upgrades is not supported; please ensure it is
disabled before beginning the upgrade and re-enable it after.
– Scott
The recommended approach to upgrading is to perform a replica-safe rolling restart of instances in each datacenter, one datacenter at a time.
> In case of an upgrade failure, would it be possible to remove the datacenter from the cluster, restore the datacenter to C*3 SW and add it back to the cluster
Bowen, thanks for reaching out.
My mind immediately jumped to a ticket with a very similar pathology: "CASSANDRA-18110: Streaming progress virtual table lock contention can trigger TCP_USER_TIMEOUT and fail streaming" -- but I see this was fixed in 4.1.1.
On Sep 11, 2023, at 2:09 PM, Bowen Song
“select * from …” without a predicate from a user table would be very
expensive, yes.
A query from a small, node-local system table such as “select * from
system.peers” would make a better health check. 👍
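A minimal sketch of such a check in CQL, with columns chosen just for illustration:

```
-- system.peers is node-local, so this read never leaves the coordinator
SELECT peer, data_center, rpc_address FROM system.peers;
```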
- Scott
> On Aug 25, 2023, at 10:58 AM, Raphael Mazelier wrote:
>
>
> Mind that a new
A few thoughts on this:
– 80TB per machine is pretty dense. Consider the amount of data you'd need to re-replicate in the event of a hardware failure that takes down all 80TB (DIMM failure requiring replacement, non-redundant PSU failure, NIC, etc.). At a sustained ~500MB/s of streaming throughput, for example, restreaming 80TB would take nearly two days.
– 24GB of heap is also pretty generous. Depending
Hi Mark,
You can unsubscribe from this mailing list by sending a blank email to "user-unsubscr...@cassandra.apache.org" from the address that is subscribed to the list. Other members of the list are not able to take this action on someone's behalf.
Details on how to join and leave lists are here:
Vaibhav, thank you for reaching out and sharing this issue report.
Could you run an `lsof` and share which SSTable files you see open (e.g., all SSTable components or a subset of them); and also share the value of the `disk_access_mode` property from your cassandra.yaml?
Opening a Jira ticket for
That’s correct, yes. There is no current or upcoming version of Apache Cassandra in which materialized views are expected to be production-ready and to maintain full consistency with their base table.
The feature is classified as “experimental” to indicate that this behavior is
The performance implications would primarily come from handling mutations this large, rather than from the commitlog segment size itself. These mutations would occupy large, contiguous areas of heap and increase memory pressure in the process. (Note that by default, the maximum mutation size is half the commitlog segment size.)
Increasing commit_log_segment_size_in_mb is likely
Can you check the write timestamp of the data you're attempting to delete?
https://docs.datastax.com/en/cql-oss/3.3/cql/cql_using/useWritetime.html
If the timestamp of the write is in the future (e.g., due to a time sync issue or an errant client-supplied timestamp at the time of that write), the
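For example, with a hypothetical table and column (WRITETIME returns microseconds since the epoch):

```
-- hypothetical table/column; inspect when the cell was written
SELECT id, val, WRITETIME(val) FROM ks.tbl WHERE id = 42;

-- a delete only wins if its timestamp is later than the write's
DELETE FROM ks.tbl USING TIMESTAMP 1700000000000001 WHERE id = 42;
```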
Hugely excited about this -- thanks to the Program Committee and to the Linux Foundation for organizing! It's been a long few years away from conferences and I can't wait to see all of you.
Beyond learning about what everyone is doing with Apache Cassandra, I'm looking forward to the hallway chats and
Bumping this note from Andy downthread to make sure everyone has seen it and is aware: “Before you do that, you will want to make sure a cycle of repairs has run on the replicas of the down node to ensure they are consistent with each other.”
When replacing an instance, it’s necessary to run repair (
Hi Vaibhav, thanks for reaching out.
Based on my understanding of this exception, this may be due to the index for this partition exceeding 2GiB (which is *extremely* large for a partition index component). Reducing the size of the column index below 2GiB may resolve this issue. You may be able to
Thanks for reaching out.
Changing the compressor for a table is both safe and common. Future flushes /
compactions will use the new codec as SSTables are written, and SSTables
currently present on disk will remain readable with the previous codec.
You may also want to take a look at the Zstandard
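A sketch of such a change, assuming a hypothetical table and a cluster already on Cassandra 4.0+ (where ZstdCompressor is available):

```
-- hypothetical table; only newly written SSTables use the new codec
ALTER TABLE ks.tbl WITH compression = {'class': 'ZstdCompressor'};
```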
No downside at all for 3.x -> 4.x (however, Cassandra 3.x reading 2.1 SSTables incurred a performance hit).
Many users of Cassandra don't run upgradesstables after 3.x -> 4.x upgrades at all. It's not necessary to run it until a hypothetical future time if/when support for reading Cassandra 3.x SSTables
> but still as I understand the documentation the read repair should not be in the blocking path of a query?
Read repair is in the blocking read path for the query, yep. At quorum consistency levels, the read repair must complete before returning a result to the client to ensure the data returned w
work fine?
>
> Jaydeep
>
>> On Mon, Jun 13, 2022 at 10:25 PM C. Scott Andreas wrote:
Thank you for reaching out, and for planning the upgrade!
Upgrading from 3.0.14 to 3.0.27 would be best, followed by upgrading to 4.0.4.
3.0.14 contains a number of serious bugs that are resolved in more recent 3.0.x
releases (3.0.19+ are generally good/safe). Upgrading to 3.0.27 will put you on
Hi Gil, thanks for reaching out.
Can you check Cassandra's logs to see if any uncaught exceptions are being thrown? What you described suggests the possibility of an uncaught exception being thrown in the Gossiper thread, preventing further tasks from making progress; however I'm not aware of any
From the documentation I see that 3.2 supports up to the V5 version of the protocol. Does this mean (a) the 3.2 driver with V3 protocol works with Cassandra 4.0, or (b) I have to change the protocol version to V4 or higher on 3.2 to be able to work with 4.0?
On Tue, Apr 19, 2022 at 11:15 AM C. Scott Andreas
The DataStax Java 3.x drivers work very well with Apache Cassandra 4.0. I'd recommend one of the more recent releases in the series, though (e.g., 3.6.x+).
I'm not the author of this documentation, but it may refer to the fact that the 3.x Java Driver supports the CQL v4 wire protocol, but not t
Hi Jaydeep, thanks for reaching out.
The most notable deadlock identified and resolved in the last few years is https://issues.apache.org/jira/browse/CASSANDRA-15367: Memtable memory allocations may deadlock (fixed in Apache Cassandra 3.0.21).
Mentioning for completeness - since the release of Cassandra
Hi Joe, it looks like "PT2M" may refer to a timeout value that could be set by your Spark job's initialization of the client. I don't see a string matching this in the Cassandra codebase itself, but I do see that it is parseable as a java.time.Duration:

```
jshell> java.time.Duration.parse("PT2M").getSeconds()
$1 ==> 120
```
Hi Manish,
I understand this answer is non-specific and might not be the most helpful, but
figured I’d mention — Cassandra 3.11.2 is nearly four years old and a large
number of bugs in repair and other subsystems have been resolved in the time
since.
I’d recommend upgrading to the latest release
Hi, I noticed that you mentioned your goal is to optimize write throughput, and that you're using Cassandra 3.11.2.
Optimizing for write throughput is usually a proxy for optimizing for compaction, as writes themselves are very cheap but compacting to keep up with them can be pretty expensive. You'
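To make that concrete, compaction behavior is configured per table; a sketch with a hypothetical table (the strategy and values are placeholders, not recommendations):

```
-- hypothetical table; compaction strategy/options are per-table settings
ALTER TABLE ks.tbl WITH compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'max_threshold': 32
};
```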
Hi James, thanks for reaching out.
A large number of fixes have landed for Incremental Repair in the 3.x series, though it's possible some may have been committed to 4.0 without a backport. Incremental repair works well on Cassandra 4.0.1. I'd start here to ensure you're picking up all fixes that
s implies extra
time and operational cost, hopefully within the boundaries of the revenue
stream the system is expected to support.
Pardon the long e-mail and for waxing a bit philosophical. I hope this provides
some food for thought.
- Scott
---
C. Scott Andreas
Engineer, Urban Airship, Inc.
ng that, and would like to figure out why we see long CMS collections +
promotion failures triggering full GCs during a snapshot.
Has anyone seen this, or have suggestions on how to prevent full GCs from
occurring during a flush / snapshot?
Thanks,
- Scott
---
C. Scott Andreas
Engineer, Urban Airship, Inc.
http://www.urbanairship.com