Hi,
I am trying to understand how the PG peering count could increase between
two OSDMAP epochs when none of the OSDs went down or up, nor were there
changes in their weights.
This behaviour was seen on a Hammer cluster, but I assume it is no
different in recent releases.
I'm pasting
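In case it is useful, one way to compare two OSDMap epochs directly is to export and diff them; a minimal sketch, where the epoch numbers 1234 and 1235 are placeholders:
```
# Hedged sketch: 1234/1235 are placeholder epoch numbers.
ceph osd getmap 1234 -o osdmap.1234
ceph osd getmap 1235 -o osdmap.1235
osdmaptool osdmap.1234 --print > osdmap.1234.txt
osdmaptool osdmap.1235 --print > osdmap.1235.txt
diff osdmap.1234.txt osdmap.1235.txt
# Current per-PG states (including peering) can be sampled with:
ceph pg dump pgs_brief
```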
Hello,
How frequently do RBD device names get reused? For instance, if I map a
volume on a client and it gets mapped to /dev/rbd0, will a subsequent map
reuse that name right away after it is unmapped?
I ask because, in our use case, we try to unmap a volume and
we are thinking
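For reference, a quick way to observe the device-name behaviour is simply to map, list, unmap, and map again; a minimal sketch, where the pool/image names are placeholders:
```
# Hedged sketch: rbd/vol1 and rbd/vol2 are placeholder pool/image names.
rbd map rbd/vol1        # prints the assigned device, e.g. /dev/rbd0
rbd showmapped          # list current image -> device assignments
rbd unmap /dev/rbd0
rbd map rbd/vol2        # check whether /dev/rbd0 is handed out again
rbd showmapped
```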
Hello,
Has the resolution for this issue been released in Nautilus?
I'm still experiencing this on 14.2.9, though I noticed the PR
(https://github.com/ceph/ceph/pull/33978) appears to have been merged.
Thanks!
-Garrett
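One way to check whether that PR actually landed in a given release is to ask git which tags contain its merge commit; a minimal sketch, where <merge-sha> is a placeholder for the PR's merge commit:
```
# Hedged sketch: <merge-sha> is a placeholder for the PR's merge commit.
git clone https://github.com/ceph/ceph.git
cd ceph
git tag --contains <merge-sha> | grep '^v14\.2'
# An empty result means the commit is not in any v14.2.x tag yet.
```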
> On Apr 17, 2020, at 9:38 AM, Katarzyna Myrek wrote:
>
> Hi Eric,
>
> Would it be possible to use it with an older cluster version (like
> running new radosgw-admin in the container, connecting to the cluster
> on 14.2.X)?
>
> Kind regards / Pozdrawiam,
> Katarzyna Myrek
I did mention the nau
> On Apr 16, 2020, at 1:58 PM, EDH - Manuel Rios
> wrote:
>
> Hi Eric,
>
> Is there any ETA for getting those scripts backported, maybe in 14.2.10?
>
> Regards
> Manuel
There is a Nautilus backport PR where the code works. It’s waiting on the added
testing to be completed on master, so that can
I was able to do what I needed to do.
Thank you,
Mathew
Sent with [ProtonMail](https://protonmail.com) Secure Email.
‐‐‐ Original Message ‐‐‐
On Thursday, April 16, 2020 4:40 AM, Janne Johansson
wrote:
> On Wed, 15 Apr 2020 at 21:01, Mathew Snyder wrote:
>
>> I'm running into a prob
Hi Sebastian, of course! I misspelled the option. Sometimes it’s difficult to see the forest for the trees… But after upgrading to 15.2.1 I now have the CEPHADM_STRAY_HOST problem:
HEALTH_WARN 3 stray host(s) with 15 daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_HOST: 3 stray host(s) with 15 da
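If the hosts are supposed to be managed, the usual fix is to add them to cephadm; otherwise the warning can be silenced. A minimal sketch, with host1 as a placeholder hostname:
```
# Hedged sketch: host1 is a placeholder hostname.
# Bring a stray host under cephadm management:
ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@host1
ceph orch host add host1
# Or, if the hosts are intentionally left unmanaged, silence the warning:
ceph config set mgr mgr/cephadm/warn_on_stray_hosts false
```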
Hi Eric,
Would it be possible to use it with an older cluster version (like
running new radosgw-admin in the container, connecting to the cluster
on 14.2.X)?
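Concretely, I am thinking of something like the sketch below (the image tag,
the docker invocation, and the /etc/ceph bind mount are just my assumptions):
```
# Hedged sketch: assumes ceph.conf and an admin keyring are in /etc/ceph,
# and that the ceph/ceph:v15.2 image accepts the command as-is.
docker run --rm \
  -v /etc/ceph:/etc/ceph:ro \
  ceph/ceph:v15.2 \
  radosgw-admin user list
```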
Kind regards / Pozdrawiam,
Katarzyna Myrek
On Thu, 16 Apr 2020 at 19:58, EDH - Manuel Rios wrote:
>
> Hi Eric,
>
>
>
> Are there any ET
Understood! I really appreciate your explanation.
On Fri, Apr 17, 2020 at 3:11 PM Yan, Zheng wrote:
>
> On Fri, Apr 17, 2020 at 10:23 AM Xinying Song
> wrote:
> >
> > Hi, Yan:
> > I agree with the idea that log events can be used to reconstruct the cache
> > when a crash happens. But the master can reconstruct its cache
On Thu, Apr 16, 2020 at 3:27 PM Dan van der Ster wrote:
>
> On Thu, Apr 16, 2020 at 3:53 AM Yan, Zheng wrote:
> >
> > On Thu, Apr 16, 2020 at 12:15 AM Dan van der Ster
> > wrote:
> > >
> > > On Wed, Apr 15, 2020 at 5:13 PM Yan, Zheng wrote:
> > > >
> > > > On Wed, Apr 15, 2020 at 2:33 AM Dan v
Hi,
sorry, I didn't write that very clearly; what I meant was: in the workflow of
* systemctl stop ceph-osd@$ID
* umount /var/lib/ceph/osd/ceph-$ID
* cephadm adopt --style legacy --name osd.$ID
you also need to run ```systemctl start ceph-$CLUSTERID@osd.$ID``` (see the consolidated sketch below).
After a reboot, my OSDs are fine and up. I d
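Consolidating those steps into one sketch (assuming $ID is the OSD id and $CLUSTERID is the cluster fsid, as described above):
```
# Hedged sketch of the workflow above; $ID is the OSD id, $CLUSTERID the cluster fsid.
systemctl stop ceph-osd@$ID
umount /var/lib/ceph/osd/ceph-$ID
cephadm adopt --style legacy --name osd.$ID
systemctl start ceph-$CLUSTERID@osd.$ID
```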
On Fri, Apr 17, 2020 at 10:23 AM Xinying Song wrote:
>
> Hi, Yan:
> I agree with the idea that log events can be used to reconstruct the cache
> when a crash happens. But the master can reconstruct its cache by replaying
> its EUpdate log event. The ESlaveUpdate::OP_COMMIT log event seems to
> have nothing to