Yes, but those 'changes' can be relayed via the kernel rbd driver, can't they?
Besides, I don't think you can move an rbd block device that is in use to a
different pool anyway.
The manual page [0] mentions nothing about configuration settings needed
for rbd use, nor for ssd. They are also usi
On Mon, 6 May 2019 at 10:03, Marc Roos wrote:
>
> Yes, but those 'changes' can be relayed via the kernel rbd driver, can't they?
> Besides, I don't think you can move an rbd block device that is in use to a
> different pool anyway.
>
>
No, but you can move the whole pool, which takes all RBD images with it.
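For example (a sketch; the rule and pool names below are illustrative and assume an SSD-backed target), reassigning a pool's CRUSH rule moves all of its data, and with it every RBD image in the pool:

# create a replicated rule restricted to the ssd device class
ceph osd crush rule create-replicated rbd-ssd default host ssd
# repoint the pool; Ceph then migrates all PGs (and thus all RBD images) to OSDs matched by the new rule
ceph osd pool set rbd crush_rule rbd-ssd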
Hi all,
I've a problem deploying mimic using ceph-ansible at the following step:
-- cut here ---
TASK [ceph-mon : collect admin and bootstrap keys] *
Monday 06 May 2019 17:01:23 +0800 (0:00:00.854) 0:05:38.899
fatal: [cphmon3a]: FAILED!
I restarted the mds process which was in the "up:stopping" state.
Since then it is no longer falling behind on trimming.
All (sub)directories are accessible as normal again.
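For anyone hitting the same thing, the kind of commands involved look roughly like this (the daemon id in the systemd unit is a placeholder and depends on how the MDS was deployed):

# show each MDS rank and its state (up:active, up:stopping, ...)
ceph fs status
# restart the MDS daemon that is stuck in up:stopping
systemctl restart ceph-mds@<id>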
It seems there are stability issues with snapshots in a multi-mds cephfs on
nautilus.
This has already been suspected here:
http://
Hi all,
I am also switching OSDs to the new bitmap allocator on 13.2.5. That
went quite smoothly so far, except for one OSD that keeps segfaulting
when I enable the bitmap allocator. Each time I disable the bitmap allocator
on it again, the OSD is fine again. Segfault error of the OSD:
--- begin
Hi Kenneth,
mimic 13.2.5 has the previous version of the bitmap allocator, which isn't
recommended for use. Please revert.
The new bitmap allocator will be available starting with 13.2.6.
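For reference, a minimal sketch of that revert, assuming the allocator was switched via ceph.conf (restart the OSD afterwards):

[osd]
# revert to the stupid allocator (the mimic default) until 13.2.6
bluestore allocator = stupid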
Thanks,
Igor
On 5/6/2019 4:19 PM, Kenneth Waegeman wrote:
Hi all,
I am also switching OSDs to the new bitmap allocator
Hello all, I'm testing out a new cluster that we hope to put into
production soon. Performance has overall been great, but there's one
benchmark that not only stresses the cluster but causes it to degrade:
async randwrites.
The benchmark:
# The file was previously laid out with dd'd random data
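(The actual command is cut off in the digest; an async randwrite benchmark of this kind is typically driven with fio along these lines, all parameters illustrative:)

fio --name=randwrite --filename=/mnt/test/file --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=64 --numjobs=4 \
    --runtime=300 --time_based --group_reporting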
You mention the version of ansible, which is right. How about the branch of
ceph-ansible? It should be 3.2-stable. What OS? I haven't come across this
problem myself, though I've hit a lot of other ones.
On Mon, May 6, 2019 at 3:47 AM ST Wong (ITSC) wrote:
> Hi all,
>
>
>
> I’ve problem in deploying mimic usi
But what happens if the guest OS has trim enabled and qemu did not have
the discard option set? Should some fsck be done to correct this?
(Sorry, this is getting a bit off topic.)
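A quick way to check from inside the guest whether discards actually reach the virtual disk (a sketch; device names and output will differ):

# non-zero DISC-GRAN / DISC-MAX values mean the device accepts discards
lsblk --discard
# if they are non-zero, a manual trim pass over all mounted filesystems is simply:
fstrim -av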
-----Original Message-----
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: Wednesday, 1 May 2019 23:3
The reason you moved to ceph storage is that you do not want to do such
things. Remove the drive and let ceph recover.
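Roughly (the OSD id is a placeholder):

ceph osd out <id>                              # start moving data off the failed OSD
systemctl stop ceph-osd@<id>                   # on the host holding the dead disk
ceph osd purge <id> --yes-i-really-mean-it     # remove it from the CRUSH map, OSD map and auth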
On May 6, 2019 11:06 PM, Florent B wrote:
> Hi,
>
> It seems that OSD disk is dead (hardware problem), badblocks command
> returns a lot of badblocks.
>
> Is there an
Hello,
I need to install Ceph Nautilus from a local repository. I downloaded all
the packages from the Ceph site and created a local repository on the servers;
the servers don't have internet access. But whenever I try to install Ceph,
it tries to install the EPEL release, and then the installation was f
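In case it helps, a sketch of the kind of local repo setup meant here (repo id and paths are illustrative; any EPEL dependencies would also need to be mirrored locally):

cat > /etc/yum.repos.d/ceph-local.repo <<'EOF'
[ceph-local]
name=Ceph Nautilus local mirror
baseurl=file:///opt/repos/ceph-nautilus/
enabled=1
gpgcheck=0
EOF
yum install --disablerepo='*' --enablerepo=ceph-local ceph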
Thanks guys!
Sage found the issue preventing PG removal from working right and that
is going through testing now and should make the next Nautilus
release. https://github.com/ceph/ceph/pull/27929
Apparently the device health metrics are doing something slightly
naughty, so hopefully that's easy to
Yes, we're using 3.2-stable, on RHEL 7. Thanks.
From: solarflow99
Sent: Tuesday, May 7, 2019 1:40 AM
To: ST Wong (ITSC)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-create-keys loops
You mention the version of ansible, which is right. How about the branch of
ceph-ansible? sh
What's the output of "ceph -s" and "ceph osd tree"?
On Fri, May 3, 2019 at 8:58 AM Stefan Kooman wrote:
>
> Hi List,
>
> I'm playing around with CRUSH rules and device classes and I'm puzzled
> if it's working correctly. Platform specifics: Ubuntu Bionic with Ceph 14.2.1
>
> I created two new dev
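A few commands that help verify device-class rules are doing what you expect (rule and pool names are placeholders):

ceph osd crush tree --show-shadow      # shows the per-class shadow hierarchies (e.g. default~ssd, default~hdd)
ceph osd crush rule dump <rulename>    # confirms which device class the rule's "take" step uses
ceph osd pool get <poolname> crush_rule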
Hmm, I didn't know we had this functionality before. It looks to be
changing quite a lot at the moment, so be aware this will likely
require reconfiguring later.
On Sun, May 5, 2019 at 10:40 AM Kyle Brantley wrote:
>
> I've been running luminous / ceph-12.2.11-0.el7.x86_64 on CentOS 7 for about
On 5/6/2019 6:37 PM, Gregory Farnum wrote:
Hmm, I didn't know we had this functionality before. It looks to be
changing quite a lot at the moment, so be aware this will likely
require reconfiguring later.
Good to know, and not a problem. In any case, I'd assume it won't change
substantially fo
Some more information:
RPMs installed on the MONs:
python-cephfs-13.2.5-0.el7.x86_64
ceph-base-13.2.5-0.el7.x86_64
libcephfs2-13.2.5-0.el7.x86_64
ceph-common-13.2.5-0.el7.x86_64
ceph-selinux-13.2.5-0.el7.x86_64
ceph-mon-13.2.5-0.el7.x86_64
Doing a mon_status gives the following. Only the local host runni
Quoting Gregory Farnum (gfar...@redhat.com):
> What's the output of "ceph -s" and "ceph osd tree"?
ceph -s
  cluster:
    id:     40003df8-c64c-5ddb-9fb6-d62b94b47ecf
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum michelmon1,michelmon2,michelmon3 (age 2d)
    mgr: michelmon2(active