Please set debug_mds=10 and try again.
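A rough sketch of how to do that on the replaying MDS; the daemon name below is
a placeholder, and either mechanism should work:

# via the admin socket on the host running that MDS
ceph daemon mds.<name> config set debug_mds 10

# or injected remotely from a node with admin credentials
ceph tell mds.<name> injectargs '--debug_mds 10'

# drop it back to the default once you have captured the log
ceph daemon mds.<name> config set debug_mds 1/5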
On Tue, Apr 2, 2019 at 1:01 PM Albert Yue wrote:
>
> Hi,
>
> This happens after we restart the active MDS, and somehow the standby MDS
> daemon cannot take over successfully and is stuck at up:replaying. It is
> showing the following log. Any idea on how t
On 31/03/2019 17.56, Christian Balzer wrote:
Am I correct that, unlike with replication, there isn't a maximum size
of the critical path OSDs?
As far as I know, the math for calculating the probability of data loss
wrt placement groups is the same for EC and for replication. Replication
to
Hi,
on one of my clusters I'm getting an error message which is making
me a bit nervous. While listing the contents of a pool, I get an
error for one of the images:
[root@node1 ~]# rbd ls -l nvme > /dev/null
rbd: error processing image xxx: (2) No such file or directory
[root@node1 ~]# rbd info nvme/
Hi,
we are about to set up a new Ceph cluster for our OpenStack cloud. Ceph
is used for images, volumes and object storage. I'm unsure how to handle
these cases and how to move the data correctly.
Object storage:
I consider this the easiest case, since RGW itself provides the
necessary means t
Hello Hector,
Firstly I'm so happy somebody actually replied.
On Tue, 2 Apr 2019 16:43:10 +0900 Hector Martin wrote:
> On 31/03/2019 17.56, Christian Balzer wrote:
> > Am I correct that, unlike with replication, there isn't a maximum size
> > of the critical path OSDs?
>
> As far as I kn
Quoting Burkhard Linke (burkhard.li...@computational.bio.uni-giessen.de):
> Hi,
> Images:
>
> A straightforward approach would be to export all images with qemu-img from
> one cluster and upload them again on the second cluster. But this will
> break snapshots, protections, etc.
You can use rbd-
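The reply above is cut off. For what it's worth, one way to copy images between
clusters while keeping their snapshot history (not necessarily what the author
was about to suggest) is rbd export/export-diff piped into import/import-diff
on the destination side. Pool, image, snapshot and host names here are
placeholders:

# full copy up to the first snapshot, then recreate that snapshot remotely
rbd export rbd/vm1@s1 - | ssh cluster2-node rbd import - rbd/vm1
ssh cluster2-node rbd snap create rbd/vm1@s1

# incremental copy of everything between s1 and s2; import-diff recreates s2
rbd export-diff --from-snap s1 rbd/vm1@s2 - | ssh cluster2-node rbd import-diff - rbd/vm1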
On 02/04/2019 18.27, Christian Balzer wrote:
I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2
pool with 1024 PGs.
(20 choose 2) is 190, so you're never going to have more than that many
unique sets of OSDs.
I just looked at the OSD distribution for a replica 3 pool ac
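As a quick sanity check of those numbers with plain shell arithmetic:

echo $(( 20*19/2 ))      # C(20,2) = 190 possible OSD pairs
echo $(( 20*19*18/6 ))   # C(20,3) = 1140 possible OSD triples

And assuming the 20 OSDs are spread evenly over the 5 hosts (4 per host), a
host-separating CRUSH rule excludes the 5 * C(4,2) = 30 same-host pairs,
leaving 160 usable pairs.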
Quoting Stadsnet (jwil...@stads.net):
> On 26-3-2019 16:39, Ashley Merrick wrote:
> > Have you upgraded any OSDs?
>
>
> No, didn't go through with the OSDs
Just checking here: are you sure all PGs have been scrubbed while
running Luminous? As the release notes [1] mention this:
"If you are uns
Hi!
On 29.03.2019 at 23:56, Paul Emmerich wrote:
There's also some metadata overhead etc. You might want to consider
enabling inline data in cephfs to handle small files in a
space-efficient way (note that this feature is officially marked as
experimental, though).
http://docs.ceph.com/docs/mas
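For reference, a rough sketch of turning it on for a filesystem named "cephfs"
(the name is a placeholder; since the feature is experimental, your release may
also require an extra confirmation flag, see "ceph fs set --help"):

ceph fs set cephfs inline_data true

# and to inspect the current state afterwards
ceph fs get cephfs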
On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
wrote:
>
> Hi,
>
> on one of my clusters I'm getting an error message which is making
> me a bit nervous. While listing the contents of a pool, I get an
> error for one of the images:
>
> [root@node1 ~]# rbd ls -l nvme > /dev/null
> rbd: error processing ima
Hi,
If you run "rbd snap ls --all", you should see a snapshot in
the "trash" namespace.
I just tried the command "rbd snap ls --all" on a lab cluster
(nautilus) and get this error:
ceph-2:~ # rbd snap ls --all
rbd: image name was not specified
Are there any requirements I haven't noticed?
On Tue, Apr 2, 2019 at 8:42 AM Eugen Block wrote:
>
> Hi,
>
> > If you run "rbd snap ls --all", you should see a snapshot in
> > the "trash" namespace.
>
> I just tried the command "rbd snap ls --all" on a lab cluster
> (nautilus) and get this error:
>
> ceph-2:~ # rbd snap ls --all
> rbd: image n
On Tue, Apr 2, 2019 at 8:23 PM Clausen, Jörn wrote:
>
> Hi!
>
> On 29.03.2019 at 23:56, Paul Emmerich wrote:
> > There's also some metadata overhead etc. You might want to consider
> > enabling inline data in cephfs to handle small files in a
> > space-efficient way (note that this feature is off
On Tue, Apr 2, 2019 at 9:05 PM Yan, Zheng wrote:
>
> On Tue, Apr 2, 2019 at 8:23 PM Clausen, Jörn wrote:
> >
> > Hi!
> >
> > On 29.03.2019 at 23:56, Paul Emmerich wrote:
> > > There's also some metadata overhead etc. You might want to consider
> > > enabling inline data in cephfs to handle small
On Tue, Apr 2, 2019 at 3:05 PM Yan, Zheng wrote:
>
> On Tue, Apr 2, 2019 at 8:23 PM Clausen, Jörn wrote:
> >
> > Hi!
> >
> > On 29.03.2019 at 23:56, Paul Emmerich wrote:
> > > There's also some metadata overhead etc. You might want to consider
> > > enabling inline data in cephfs to handle small
Sorry -- you need the "" as part of that command.
My bad, I only read this from the help page ignoring the
(and forgot the pool name):
-a [ --all ] list snapshots from all namespaces
I figured this would list all existing snapshots, similar to the "rbd
-p ls --long" command. T
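So with the image spec included it would look something like this (pool and
image names as used earlier in the thread):

rbd snap ls --all nvme/xxx

Trashed snapshots should then show up with "trash" in the namespace column.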
Hello,
I haven't had any issues either with 4k allocation size in a cluster
holding 358M objects for 116TB (237TB raw) and 2.264B chunks/replicas.
This is an average of 324k per object and 12.6M chunks/replicas per
OSD, with RocksDB sizes going from 12.1GB to 21.14GB depending on how
much PG
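A back-of-the-envelope check of those averages (decimal units assumed):

echo $(( 116 * 10**12 / (358 * 10**6) ))   # ~324000 bytes per object
echo $(( 2264 * 10**6 / (126 * 10**5) ))   # ~180 OSDs implied by 12.6M chunks/OSD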
On 02/04/2019 15.05, Yan, Zheng wrote:
> I don't use this feature. We have no plans to mark this feature
> stable (we will probably remove this feature in the future).
Oh no! We have activated inline_data since our cluster does have lots of small
files (but also big ones), and
performance i
On Tue, Apr 2, 2019 at 9:10 PM Paul Emmerich wrote:
>
> On Tue, Apr 2, 2019 at 3:05 PM Yan, Zheng wrote:
> >
> > On Tue, Apr 2, 2019 at 8:23 PM Clausen, Jörn wrote:
> > >
> > > Hi!
> > >
> > > On 29.03.2019 at 23:56, Paul Emmerich wrote:
> > > > There's also some metadata overhead etc. You migh
On 2-4-2019 at 12:16, Stefan Kooman wrote:
Quoting Stadsnet (jwil...@stads.net):
On 26-3-2019 16:39, Ashley Merrick wrote:
Have you upgraded any OSDs?
No, didn't go through with the OSDs
Just checking here: are you sure all PGs have been scrubbed while
running Luminous? As the release not
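If I remember those release notes correctly, the check boils down to the OSDMap
flags (please double-check against the notes themselves):

ceph osd dump | grep ^flags
# once every PG has been scrubbed while running Luminous, the flags should
# include recovery_deletes and purged_snapdirs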
This also happened sometimes during a Luminous -> Mimic upgrade due to
a bug in Luminous; however I thought it was fixed on the ceph-mgr
side.
Maybe the fix was (also) required in the OSDs and you are seeing this
because the running OSDs have that bug?
Anyways, it's harmless and you can ignore it.
On Tue, 2 Apr 2019 19:04:28 +0900 Hector Martin wrote:
> On 02/04/2019 18.27, Christian Balzer wrote:
> > I did a quick peek at my test cluster (20 OSDs, 5 hosts) and a replica 2
> > pool with 1024 PGs.
>
> (20 choose 2) is 190, so you're never going to have more than that many
> unique sets o
Looks like http://tracker.ceph.com/issues/37399. Which version of
ceph-mds are you using?
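To check which version each daemon is actually running:

ceph versions        # per-daemon-type version summary (Luminous and newer)
# or directly on an MDS host
ceph-mds --version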
On Tue, Apr 2, 2019 at 7:47 AM Sergey Malinin wrote:
>
> These steps correspond pretty well to
> http://docs.ceph.com/docs/mimic/cephfs/disaster-recovery/
> Were you able to replay the journal manually with no issues?
Hello Ceph Users,
I am finding that the write latency across my ceph clusters isn't great and I
wanted to see what other people are getting for op_w_latency. Generally I am
getting 70-110ms latency.
I am using:
ceph --admin-daemon /var/run/ceph/ceph-osd.102.asok perf dump | grep -A3 '"op_w_latency'
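If jq is available, the lifetime average (in seconds, since the OSD started)
can be pulled out directly; a sketch for all OSDs on one host:

for sock in /var/run/ceph/ceph-osd.*.asok; do
    printf '%s ' "$sock"
    ceph --admin-daemon "$sock" perf dump | jq '.osd.op_w_latency.avgtime'
done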
Thanks for the updated command – much cleaner!
The OSD nodes each have a single 6-core X5650 @ 2.67GHz, 72GB of RAM and around
8 x 10TB HDD OSDs / 4 x 2TB SSD OSDs. CPU usage is around 20% and the RAM has
22GB available.
The 3 MON nodes are the same but with no OSDs.
The cluster has around 150 drives and only d
Quoting Paul Emmerich (paul.emmer...@croit.io):
> This also happened sometimes during a Luminous -> Mimic upgrade due to
> a bug in Luminous; however I thought it was fixed on the ceph-mgr
> side.
> Maybe the fix was (also) required in the OSDs and you are seeing this
> because the running OSDs hav