On Wed, Dec 18, 2024 at 11:50 AM Paul Chandler wrote:
>
> This is all old history and has been fixed, so it is not really what the
> question was about. However, these old problems have left a bad legacy in the
> memory of the people that matter. Hence the push back we have now.
>
I totally understand th
OK, it seems like I didn’t explain it too well, but yes, it is the rolling
restart three times as part of the upgrade that is causing the push back. My
message was a bit vague on the use cases because there are confidentiality
agreements in place, so I can’t share too much.
We have had problems in th
On Wed, Dec 18, 2024 at 12:26 PM Jeff Jirsa wrote:
> I think this is one of those cases where if someone tells us they’re
> feeling pain, instead of telling them it shouldn’t be painful, we try to
> learn a bit more about the pain.
>
> For example, both you and Scott expressed surprise at the concern of rolling restarts
On Wed, Dec 18, 2024 at 12:12 PM Jon Haddad wrote:
> I think we're talking about different things.
>
> > Yes, and Paul clarified that it wasn't (just) an issue of having to do
> rolling restarts, but the work involved in doing an upgrade. Were it only
> the case that the hardest part of doing an upgrade was the rolling restart...
I think this is one of those cases where if someone tells us they’re feeling
pain, instead of telling them it shouldn’t be painful, we try to learn a bit
more about the pain.
For example, both you and Scott expressed surprise at the concern of rolling
restarts (you repeatedly, Scott mentioned t
Yeah, the issue with the yaml being out of sync is consistent with any
other JMX change, such as compaction throughput / threads, etc. You'd have
to deploy the config and apply the change via JMX; otherwise you'd risk
restarting the node and running into an issue.
I think there's probably room for
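(As a minimal sketch of that two-step pattern for compaction throughput, assuming the nodetool subcommands and the post-4.1 yaml key name shown below; worth verifying against the version in use:)

  # 1. Apply the new value live via JMX (nodetool wraps the JMX call):
  nodetool setcompactionthroughput 64
  nodetool getcompactionthroughput   # confirm the running value

  # 2. Persist the same value in cassandra.yaml so a restart doesn't undo it:
  #   compaction_throughput: 64MiB/s
  #   (older releases use compaction_throughput_mb_per_sec: 64)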
I think we're talking about different things.
> Yes, and Paul clarified that it wasn't (just) an issue of having to do
rolling restarts, but the work involved in doing an upgrade. Were it only
the case that the hardest part of doing an upgrade was the rolling
restart...
From several messages a
It's clear from discussion on this list that the current "storage_compatibility_mode" implementation and upgrade path for 5.0 are a source of real and legitimate user pain, and are
likely to result in many organizations slowing their adoption of the release. Would love to discuss on dev@ how we can
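(For anyone not following closely, a rough sketch of the three-phase path behind the "three rolling restarts" concern, as I understand it from the 5.0 upgrade notes; exact values are worth double-checking there:)

  # Phase 1: upgrade binaries to 5.0 node by node, keeping the default
  #   storage_compatibility_mode: CASSANDRA_4
  # Phase 2: once every node runs 5.0, edit cassandra.yaml and roll the cluster again
  #   storage_compatibility_mode: UPGRADING
  # Phase 3: once the whole cluster is in UPGRADING, edit once more and roll a third time
  #   storage_compatibility_mode: NONE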
On Wed, Dec 18, 2024 at 11:43 AM Jon Haddad wrote:
> > We (Wikimedia) have had more (major) upgrades go wrong in some way than
> right. Any significant upgrade is going to be weeks —if not months— in the
> making, with careful testing, a phased rollout, and a workable plan for
> rollback. We'd never entertain doing more than one at a time...
> We (Wikimedia) have had more (major) upgrades go wrong in some way than
right. Any significant upgrade is going to be weeks —if not months— in the
making, with careful testing, a phased rollout, and a workable plan for
rollback. We'd never entertain doing more than one at a time, it's just
way
On Tue, Dec 17, 2024 at 2:37 PM Paul Chandler wrote:
> It is a mixture of things really. Firstly, it is a legacy issue: there have
> been performance problems in the past during upgrades. These have now
> been fixed, but it is not easy to regain trust in the process.
>
> Secondly, there are
The ability to move through the SCM via nodetool would definitely help in
this situation. I can see there being an issue if the cassandra.yaml is not
changed, as the node could revert to an older mode if the node is
restarted.
Would there be any other potential problems with exposing
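(To make the drift concern concrete, a purely hypothetical sketch; no such nodetool subcommand exists today, and the name below is only a placeholder for whatever the proposal would expose:)

  # Operator advances the mode at runtime only (placeholder command name):
  nodetool setstoragecompatibilitymode UPGRADING
  # cassandra.yaml on disk still says:
  #   storage_compatibility_mode: CASSANDRA_4
  # If the node later restarts for any reason, it re-reads cassandra.yaml and
  # comes back up in CASSANDRA_4, silently undoing the runtime change.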