On Wed, Jun 15, 2022 at 7:23 AM Venky Shankar wrote:
>
> On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/55974
> > Release Notes - https://github.com/ceph/ceph/pull/46576
> >
> > Seeking approvals
(oops, I had cc'ed this to the old ceph-users list)
On Wed, Jun 15, 2022 at 1:56 PM Casey Bodley wrote:
>
> On Mon, May 11, 2020 at 10:20 AM Abhishek Lekshmanan
> wrote:
> >
> >
> > The basic premise is for an account to be a container for users, and
> > also related functionality like roles &
I have found that I can only reproduce it on clusters built initially on
pacific. My cluster, which went from nautilus to pacific, does not reproduce
the issue. My working theory is that it is related to rocksdb sharding:
https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/#rocksdb-shardi
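In case it helps with comparing the two clusters, the sharding layout can be
checked (and changed) with ceph-bluestore-tool while the OSD is stopped; the
data path and the sharding string below (the default quoted in those docs) are
just examples to adapt to your release:

  # with the OSD stopped, dump the current RocksDB column family layout
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 show-sharding

  # apply the documented default sharding to an OSD created before pacific
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
      --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard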
Hi,
"df -h" on the OSD host shows 187G is being used.
"du -sh /" shows 36G. bluefs_buffered_io is enabled here.
What's taking that 150G disk space, cache?
Then where is that cache file? Any way to configure it smaller?
# free -h
              total        used        free      shared  buff/cache
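For what it's worth, bluefs_buffered_io goes through the kernel page cache, so
it shows up under buff/cache in free rather than as on-disk usage. A few
commands that may help narrow down the df/du gap (osd.0 and the mount point
are only examples):

  # open-but-deleted files can explain df reporting more than du
  lsof +L1 /var/lib/ceph

  # current cache-related settings for one OSD
  ceph config get osd.0 bluefs_buffered_io
  ceph config get osd.0 osd_memory_target

  # lower the per-OSD memory target if needed (example value: 2 GiB)
  ceph config set osd osd_memory_target 2147483648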
On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky,
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote:
>
> Hi Eugen,
>
> in essence I would like the property "thick provisioned" to be sticky after
> creation and apply to any other operation that would be affected.
>
> To answer the use-case question: this is a disk image on a pool designed for
So basically, you need the reverse of the sparsify command, right? ;-)
I could only find several mailing list threads asking why someone would want
thick provisioning, but the feature was added eventually. I suppose cloning and
flattening the resulting image is not a desirable workaround.
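For reference, the existing tooling only covers the two end points; the pool
and image names below are placeholders:

  # fully allocate an image at creation time by writing zeros
  rbd create --thick-provision --size 100G pool/image

  # the opposite direction already exists: reclaim zeroed extents
  rbd sparsify pool/image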
Quoting Frank Schilder
Hi *,
I finally caught some debug logs during the cache pressure warnings.
In the meantime I had doubled the mds_cache_memory_limit to 128 GB,
which decreased the number of cache pressure messages significantly, but
they still appear a few times per day.
Turning on debug logs for a few seconds
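A sketch of the commands involved (mds.<name> is a placeholder; 137438953472
is 128 GiB in bytes):

  # the cache limit change mentioned above
  ceph config set mds mds_cache_memory_limit 137438953472

  # raise MDS debugging briefly, then drop back to the default level
  ceph tell mds.<name> config set debug_mds 20
  ceph tell mds.<name> config set debug_mds 1/5

  # list client sessions to spot clients holding a lot of caps
  ceph tell mds.<name> session ls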
Hi,
I have Ceph Pacific 16.2.9 with CephFS and 4 MDS daemons (2 active, 2 standby-replay)
==
RANK  STATE   MDS   ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  mds3  Reqs: 31 /s  162k   159k   69.5k  177k
 1    active  mds1  Reqs: 4 /s   31.0k  28.7k  10.6k  20.7
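That table looks like the per-rank section of ceph fs status; for reference
(the filesystem name is a placeholder):

  # per-rank activity, dentries, inodes, dirs and caps
  ceph fs status

  # standby-replay is enabled per filesystem
  ceph fs set <fs_name> allow_standby_replay true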
Hi all,
while setting up a system with cephadm under Quincy, I bootstrapped from host A, added mons on hosts B
and C, and rebooted host A.
Afterwards, ceph seemed to be in a healthy state (no OSDs yet, of course), but my host A
was "offline".
I was afraid I had run into https://tracker.ceph.
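A few commands that may help confirm how cephadm currently sees host A (the
hostname is a placeholder):

  # host list as the orchestrator sees it (the STATUS column shows Offline)
  ceph orch host ls

  # re-run the cephadm connectivity/prerequisite check against host A
  ceph cephadm check-host <hostA>

  # failing over the mgr makes the orchestrator refresh its host state
  ceph mgr fail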