We didn't want to stop building on CentOS 8, but the way it went end of
life and stopped receiving security updates forced our hand. See this
thread for details [0].
Essentially this made even building and testing with CentOS 8 infeasible,
so we suggest users migrate to CentOS 9 (so they continue
On Sun, Apr 28, 2024 at 11:50 PM Alwin Antreich wrote:
> Hi Robert,
>
> well it says it in the article.
>
> > The upcoming Squid release serves as a testament to how the Ceph project
> > continues to deliver innovative features to users without compromising on
> > quality.
>
>
> I believe it is m
matters, please contact coun...@ceph.io and we’ll direct the
matter to the appropriate people.
Thanks,
Neha Ojha, Dan van der Ster, Josh Durgin
Ceph Executive Council
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph
solved, and even then we have to
> do both Debian & Ceph simultaneously. That's an uncomfortable situation
> in several ways...
>
> On 21/08/2023 15:25, Josh Durgin wrote:
>
> There was difficulty building on bullseye due to the older version of GCC
> available: htt
https://tracker.ceph.com/issues/61845
On Mon, Aug 21, 2023 at 7:50 AM Matthew Darwin wrote:
> Thanks for the link to the issue. Any reason it wasn't added to the
> release notes (for bullseye)?
>
> I am also waiting for this to be available to start testing.
> On 2023-08-21 10:25, Josh Du
There was difficulty building on bullseye due to the older version of GCC
available: https://tracker.ceph.com/issues/61845
On Mon, Aug 21, 2023 at 3:01 AM Chris Palmer wrote:
> I'd like to try reef, but we are on debian 11 (bullseye).
> In the ceph repos, there is debian-quincy/bullseye and
> de
Thanks for the report - this is being fixed in
https://github.com/ceph/ceph/pull/52343
On Wed, Jul 12, 2023 at 2:53 PM Stefan Kooman wrote:
> On 7/12/23 23:21, Yuri Weinstein wrote:
> > Can you elaborate on how you installed cephadm?
>
> Add ceph repo (mirror):
> cat /etc/apt/sources.list.d/ceph
Smoke is all green on Ubuntu as well; given the ceph-volume tests passed
too, it looks good to go.
On Wed, Apr 12, 2023 at 8:41 AM Adam King wrote:
> Obviously the issue installing EPEL makes the runs look pretty bad. But,
> given the Ubuntu-based tests look alright, the EPEL stuff is likely
With the Reef dev cycle closing, it's time to think about S and future
releases.
There are a bunch of options for S already, add a +1 or a new option to
this etherpad, and we'll see what has the most votes next week:
https://pad.ceph.com/p/s
Josh
The LRC upgraded with no problems, so this release is good to go!
Josh
On Mon, Apr 3, 2023 at 3:36 PM Yuri Weinstein wrote:
> Josh, the release is ready for your review and approval.
>
> Adam, can you please update the LRC upgrade to 17.2.6 RC?
>
> Thx
>
>
> On Wed, Mar 29, 2023 at 3:07 PM Yuri
Looks good to go!
On Tue, Jan 24, 2023 at 7:57 AM Yuri Weinstein wrote:
> Josh, this is ready for your final review/approval and publishing
>
> Release notes - https://github.com/ceph/ceph/pull/49839
>
> On Tue, Jan 24, 2023 at 4:00 AM Venky Shankar wrote:
> >
> > On Mon, Jan 23, 2023 at 11:22
Here's a link since the attachment didn't come through:
https://github.com/jdurgin/ceph.io/raw/wip-virtual-2022-slides/src/assets/pdfs/2022.11-state-of-the-cephalopod.pdf
On Thu, Nov 3, 2022 at 8:44 AM Josh Durgin wrote:
>
> As mentioned at Ceph Virtual today, here are the
As mentioned at Ceph Virtual today, here are the slides from the
project update. The recording will be posted to the Ceph youtube
channel later.
Thanks to everyone contributing to and using Ceph, you make this all possible!
Josh
pushing the boundaries of storage!
Ceph Executive Council
Neha Ojha, Josh Durgin, Dan van der Ster
[1]
https://newsroom.ibm.com/2022-10-04-IBM-Redefines-Hybrid-Cloud-Application-and-Data-Storage-Adding-Red-Hat-Storage-to-IBM-Offerings
Great, let's release the final Octopus!
On Tue, Aug 2, 2022 at 8:00 AM Yuri Weinstein wrote:
> Greg,
>
> https://github.com/ceph/ceph/pull/47236 was tested and merged.
>
> Josh, David, unless there are any objections, this is ready for publishing!
>
> On Tue, Jul 26, 2022 at 3:16 PM Gregory Farn
On Sun, Jul 24, 2022 at 8:33 AM Yuri Weinstein wrote:
> Still seeking approvals for:
>
> rados - Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs, multimds - Venky, Patrick
> ceph-ansible - Brad pls take a look
>
> Josh, upgrade/client-upgrade-nautilus-octopus failed, do we need to fix
> it, pls
On Wed., Jun. 22, 2022, 15:44 Yuri Weinstein wrote:
>
> We did not get approvals for dashboard and rook, but we also did not get
> disapproval :)
>
> Josh, David it's ready for publishing assuming you agree.
>
Sounds ready to me!
> On Wed, Jun 22, 2022 at 3:26 PM Neha Ojha wrote:
>
>> On Wed
Hi Venky and Ernesto, how are the mount fix and grafana container build
looking?
Josh
On Fri, Apr 1, 2022 at 8:22 AM Venky Shankar wrote:
> On Thu, Mar 31, 2022 at 8:51 PM Venky Shankar wrote:
> >
> > Hi Yuri,
> >
> > On Wed, Mar 30, 2022 at 11:24 PM Yuri Weinstein
> wrote:
> > >
> > > We me
This is the first release candidate for Quincy. The final release is slated
for the end of March.
This release has been through large-scale testing thanks to several
organizations, including Pawsey Supercomputing Centre, who allowed us to
harden cephadm and the ceph dashboard on their 4000-OSD clu
Hi folks,
As we near the end of the Quincy cycle, it's time to choose a name for
the next release.
This etherpad began a while ago, so there are some votes already;
however, we wanted to open it up for anyone who hasn't voted yet. Add
your +1 to the name you prefer here, or add a new option:
On 12/6/21 12:49, Yuri Weinstein wrote:
We merged 3 PRs on top of the RC1 tip:
https://github.com/ceph/ceph/pull/44164
https://github.com/ceph/ceph/pull/44154
https://github.com/ceph/ceph/pull/44201
Assuming that Neha or other leads see any point to retest any suites,
this is ready for publishi
On 10/15/21 08:53, Josh Durgin wrote:
Hello folks, over the past few weeks the Ceph Leadership Team
has been processing Sage's departure and figuring out how to run the
project going forward.
We all appreciate Sage's leadership over the past 17 years, and will
dearly miss him. He did give us a bit of a trial run last year, with
his l
Thanks so much Sage, it's difficult to put into words how much you've
done over the years. You're always a beacon of the best aspects of open
source - kindness, wisdom, transparency, and authenticity. So many folks
have learned so much from you, and that's reflected in the vibrant Ceph
community a
On 4/21/21 9:29 AM, Josh Baergen wrote:
Hey Josh,
Thanks for the info!
With respect to reservations, it seems like an oversight that
we don't reserve other shards for backfilling. We reserve all
shards for recovery [0].
Very interesting that there is a reservation difference between
backfill
Hey Josh, adding the dev list where you may get more input.
Generally I think your analysis is correct about the current behavior.
In particular if another copy of a shard is available, backfill or
recovery will read from just that copy, not the rest of the OSDs.
Otherwise, k shards must be rea
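The read-source behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not Ceph's actual implementation; the function name and structure are invented:

```python
# Hedged sketch (not Ceph code): choose read sources for rebuilding one
# missing erasure-coded shard, mirroring the behavior described above.
def reads_needed(missing_shard, available, k):
    """Return the shards to read in order to rebuild `missing_shard`.

    If a surviving copy of that exact shard exists, read only it;
    otherwise any k distinct surviving shards are needed to decode.
    """
    if missing_shard in available:
        return [missing_shard]  # direct copy available: a single read
    survivors = [s for s in available if s != missing_shard]
    if len(survivors) < k:
        raise RuntimeError("not enough shards to decode")
    return survivors[:k]  # full EC decode: k reads required
```

This is why losing a single shard that still has a copy elsewhere is much cheaper to repair than one that must be decoded from k other shards.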
There are just a couple remaining issues before the final release.
Please test it out and report any bugs.
The full release notes are in progress here [0].
Notable Changes
---------------
* New ``bluestore_rocksdb_options_annex`` config
parameter. Complements ``bluestore_rocksdb_options`` and
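For illustration, the annex parameter could be set like this; the rocksdb option and value shown are only an example, not a tuning recommendation:

```shell
# Append extra rocksdb tuning to the defaults instead of restating the
# whole bluestore_rocksdb_options string (example option, not advice):
ceph config set osd bluestore_rocksdb_options_annex "compaction_readahead_size=2097152"
```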
Thanks for the reports - sorry we missed this.
It's safe to ignore the import error - it's for static type checking in
Python.
https://tracker.ceph.com/issues/49762
We'll release this fix next week.
Josh
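The harmless-at-runtime import pattern referred to above is presumably Python's `TYPE_CHECKING` guard; a minimal sketch, with the module and type names made up:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only static checkers (e.g. mypy) evaluate this import; it never
    # runs, so a missing module here cannot break the program at runtime.
    from some_optional_module import SomeType  # hypothetical module

def describe(obj: "SomeType") -> str:
    # The annotation is a string, so SomeType need not exist at runtime.
    return repr(obj)
```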
On 3/12/21 10:43 AM, Stefan Kooman wrote:
On 3/12/21 6:18 PM, David Caro wrote:
I got
This is great news and I look forward to using this
once Pacific is released.
Cheers
On 16/02/2021 00:43, Josh Durgin wrote:
Hello Loic!
We have developed a strategy in pacific - reducing the min_alloc_size
for HDD to 4KB by default.
Igor Fedotov did a lot of investigation and benchmarking, and came up
with some improvements to bluestore [1][2] to make this change have
little performance impact (it even increases pe
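As a hedged illustration, the setting could be inspected or overridden like this. Note that min_alloc_size is only applied when an OSD is created, so existing OSDs would need to be redeployed to pick up a new value:

```shell
# Pacific changes the HDD default to 4KB; takes effect at OSD mkfs time only.
ceph config get osd bluestore_min_alloc_size_hdd
ceph config set osd bluestore_min_alloc_size_hdd 4096
```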
We’re happy to announce the availability of the third Octopus stable
release series. This release is mainly a workaround for a potential OSD
corruption in v15.2.2. We advise users to upgrade to v15.2.3 directly.
For users running v15.2.2 please execute the following::
ceph config set osd blue
was a significant
delay)? I'm glad the tempfix is being put into place in short order;
thank you for the expedient turnaround and understanding.
On Thu, May 28, 2020 at 3:03 PM Josh Durgin wrote:
Hi Paul, we're planning to release 15.2.
On Wed, May 20, 2020 at 7:18 PM Josh Durgin wrote:
Hi folks, at this time we recommend pausing OSD upgrades to 15.2.2.
There have been a couple reports of OSDs crashing due to rocksdb
corruption after upgrading to 15.2.2 [1] [2]. It's safe to upgrade
monitors and mgr, but OSDs and everything else should wait.
We're investigating and will get a f