Hi,
On 21.01.21 at 05:42, Chris Dunlop wrote:
> Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or
> is it just "this is crazy large, if you're trying to go over this you're
> doing something wrong, rethink your life..."?
IMHO the limit is there because of the way deletion of
I have a hell of a question: how do I make a HEALTH_ERR status for a cluster
without consequences?
I'm working on CI tests and I need to check whether our reaction to
HEALTH_ERR is good. For this I need to take an empty cluster with an
empty pool and do something. Preferably quick and reversible.
For
Hi,
For HEALTH_WARN the best thing I found is to change the pool size to 1;
it raises the "1 pool(s) have no replicas configured" warning almost
instantly and can be reverted very quickly for an empty pool.
Any OSD flag (noout, nodeep-scrub, etc.) also causes a health warning. ;-)
But HEALTH_ERR is a bit
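For reference, a minimal sketch of both quick HEALTH_WARN triggers (pool name is a placeholder):

ceph osd pool set testpool size 1    # empty pool, single replica -> "1 pool(s) have no replicas configured"
                                     # (newer releases may also require mon_allow_pool_size_one / --yes-i-really-mean-it)
ceph osd pool set testpool size 3    # revert
ceph osd set noout                   # any OSD flag -> "noout flag(s) set" warning
ceph osd unset noout                 # revert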
On 21/01/2021 13:02, Eugen Block wrote:
But HEALTH_ERR is a bit more tricky. Any ideas?
I think if you set a very low quota for a pool (e.g. 1000 bytes or so)
and fill it up, it should create a HEALTH_ERR status, IIRC.
Cool idea. Unfortunately, even with a 1-byte quota (and some data in the
poo
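For completeness, a sketch of the quota approach (pool and object names are placeholders; as noted above, the resulting status may not actually reach HEALTH_ERR):

ceph osd pool set-quota testpool max_bytes 1000    # tiny quota
rados -p testpool put obj1 /etc/hosts              # push some data into the pool
ceph health detail                                 # see what the exceeded quota actually raises
ceph osd pool set-quota testpool max_bytes 0       # revert; 0 removes the quota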
On 21/01/2021 12:57, George Shuklin wrote:
I have a hell of a question: how do I make a HEALTH_ERR status for a
cluster without consequences?
I'm working on CI tests and I need to check whether our reaction to
HEALTH_ERR is good. For this I need to take an empty cluster with an
empty pool and do somet
Oh really, I thought it would be an error. My bad.
There was an OSD flag "full" which is not usable anymore. I never used
it, so I just tried it with a full OSD, which should lead to an error (and
it does):
host:~ # ceph -s
cluster:
id: 8f279f36-811c-3270-9f9d-58335b1bb9c0
health:
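If actually filling an OSD is too slow for CI, one reversible alternative (my own assumption, not what was done above) is to drop the cluster full ratio below the raw usage that ceph df reports; the OSDs then count as full and the OSD_FULL check is raised as an error:

ceph df                           # note the raw %USED of the OSDs
ceph osd set-full-ratio 0.01      # anything below the current usage -> "N full osd(s)" (HEALTH_ERR)
ceph health detail
ceph osd set-full-ratio 0.95      # restore the default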
I have an rbd-mirror snapshot on 1 image that failed to replicate and now it's
not getting cleaned up.
The cause of this was my own fault, based on the steps I took. I'm just trying to
understand how to clean up/handle the situation.
Here is how I got into this situation:
- Created a manual rbd snapshot on the
Oh that's better, I had to recreate my OSD because it didn't want to
start anymore :-D
Quoting George Shuklin:
On 21/01/2021 12:57, George Shuklin wrote:
I have a hell of a question: how do I make a HEALTH_ERR status for a
cluster without consequences?
I'm working on CI tests and I need t
When cloning the snapshot on the remote cluster, I can't see my ext4 filesystem.
I'm using the exact same snapshot on both sides. Shouldn't this be consistent?
Primary Site
root@Ccscephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep
TestSnapper1
10621 TestSnapper1 2 TiB Thu Jan 21 0
Decided to request a resync to see the results. I have a very aggressive
snapshot mirror schedule of 5 minutes, and replication just keeps starting on the
latest snapshot before it finishes. Pretty sure this would just loop over and
over if I didn't remove the schedule.
root@Ccscephtest1:~# rbd sna
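For reference, roughly the commands involved (pool/image names as above; the exact schedule syntax may differ per release):

rbd mirror snapshot schedule remove --pool CephTestPool1 5m    # drop the aggressive 5-minute schedule first
rbd mirror image resync CephTestPool1/vm-100-disk-0            # request the resync on the non-primary side
rbd mirror image status CephTestPool1/vm-100-disk-0            # watch it progress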
Does Ceph now support volume groups of RBDs? From which version, if any?
regards,
samuel
huxia...@horebdata.cn
From: Robert Sander
Date: 2021-01-21 10:57
To: ceph-users
Subject: [ceph-users] Re: Large rbd
Hi,
On 21.01.21 at 05:42, Chris Dunlop wrote:
> Is there any particular reason for
On Thu, Jan 21, 2021 at 8:34 AM Adam Boyhan wrote:
>
> When cloning the snapshot on the remote cluster I can't see my ext4
> filesystem.
>
> Using the same exact snapshot on both sides. Shouldn't this be consistent?
Yes. Has the replication process completed ("rbd mirror image status
CephTestPo
We actually have a bunch of bug fixes for snapshot-based mirroring
pending for the next Octopus release. I think this stuck snapshot case
has been fixed, but I'll try to verify on the pacific branch to make
sure.
On Thu, Jan 21, 2021 at 9:11 AM Adam Boyhan wrote:
>
> Decided to request a resync to s
Hi,
I think what's being suggested here is to create a good old LVM VG in a
virtualized guest, from multiple RBDs, each accessed as a separate
VirtIO SCSI device.
As each storage device in the LVM VG has its own queues at the VirtIO /
QEMU / RBD interface levels, that would allow for greater
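A minimal sketch of that layout inside the guest, assuming the RBDs show up as /dev/sdb, /dev/sdc and /dev/sdd (device, VG and LV names are placeholders):

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_rbd /dev/sdb /dev/sdc /dev/sdd
lvcreate -n lv_data -l 100%FREE -i 3 vg_rbd    # -i 3 stripes the LV across all three PVs, so I/O spreads over all RBD queues
mkfs.ext4 /dev/vg_rbd/lv_data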
After the resync finished, I can mount it now.
root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1
CephTestPool1/vm-100-disk-0-CLONE
root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin
--keyring /etc/ceph/ceph.client.admin.keyring
/dev/nbd0
roo
On Thu, Jan 21, 2021 at 9:40 AM Adam Boyhan wrote:
>
> After the resync finished. I can mount it now.
>
> root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1
> CephTestPool1/vm-100-disk-0-CLONE
> root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id
> adm
I've always been curious about this. Does anyone have any experience
spanning an LVM VG over multiple RBDs?
That worked! Thanks!
Now to figure out how to correct all the incorrect OSDs.
On Thu, Jan 21, 2021 at 1:29 AM Eugen Block wrote:
> If you use block_db_size and limit in your yaml file, e.g.
>
> block_db_size: 64G (or whatever you choose)
> limit: 6
>
> this should not consume the entire dis
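For anyone else hitting this, a rough sketch of such an OSD service spec and how to apply it (the device filters and the exact placement of "limit" are assumptions on my part, so check the drive group docs for your release):

cat > osd_spec.yaml <<EOF
service_type: osd
service_id: osds_with_separate_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 6
db_devices:
  rotational: 0
block_db_size: 64G
EOF
ceph orch apply osd -i osd_spec.yaml --dry-run    # preview the OSDs cephadm would create before applying for real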
On Thu, Jan 21, 2021 at 9:47 AM John Petrini wrote:
> I've always been curious about this. Does anyone have any experience
> spanning an LVM VG over multiple RBDs?
>
I do on RHEL; it works very well. Each RBD device has some inherent IO
limitations, but using multiple in parallel works quite we
We do it in production, though we haven't benchmarked it, if that's what
you're aiming for. The general consensus when we started with it was that it
allowed for greater performance (we use librbd with KVM).
--
David Majchrzak
CTO
Oderland Webbhotell AB
Östra Hamngatan 50B, 411 09 Göteborg, SWEDEN
Den 202
Hi all,
During rejoin an MDS can sometimes go OOM if the openfiles table is too large.
The workaround has been described by ceph devs as "rados rm -p
cephfs_metadata mds0_openfiles.0".
On our cluster we have several such objects for rank 0:
mds0_openfiles.0 exists with size: 199978
mds0_openfile
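A sketch of how to inspect those objects before resorting to the workaround (pool name as above; only remove them as part of that documented recovery procedure):

rados -p cephfs_metadata ls | grep '^mds0_openfiles'    # list all openfiles objects for rank 0
rados -p cephfs_metadata rm mds0_openfiles.0            # the quoted workaround; repeat for .1, .2, ... as needed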
I have noticed that RBD-Mirror snapshot mode can only manage to take 1 snapshot
per second. For example, I have 21 images in a single pool. When the schedule is
triggered, it takes the mirror snapshot of each image one at a time. It doesn't
feel or look like a performance issue, as the OSDs are Micr
I was able to trigger the issue again.
- On the primary I created a snap called TestSnapper for disk vm-100-disk-1
- Allowed the next RBD-Mirror scheduled snap to complete
- At this point the snapshot is showing up on the remote side.
root@Bunkcephtest1:~# rbd mirror image status CephTestPool
Looks like a script and cron will be a solid workaround.
Still interested to know if there are any options to make it so rbd-mirror can
take more than 1 mirror snap per second.
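For what it's worth, a rough sketch of the kind of script I mean (pool name is a placeholder; it assumes snapshot-based mirroring is already enabled on each image, and backgrounding the calls avoids the one-per-second pacing of the scheduler):

#!/bin/bash
POOL=CephTestPool1
for IMG in $(rbd ls "$POOL"); do
    rbd mirror image snapshot "$POOL/$IMG" &    # create one mirror snapshot per image, in parallel
done
wait

Run it from cron every 5 minutes instead of the built-in schedule.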
From: "adamb"
To: "ceph-users"
Sent: Thursday, January 21, 2021 11:18:36 AM
Subject: [ceph-users] RBD-Mirror
On Thu, Jan 21, 2021 at 2:00 PM Adam Boyhan wrote:
>
> Looks like a script and cron will be a solid work around.
>
> Still interested to know if there are any options to make it so rbd-mirror
> can take more than 1 mirror snap per second.
>
>
>
> From: "adamb"
> To: "ceph-users"
> Sent: Thursda
Let me just start off by saying, I really appreciate all your input so far. It's
been a huge help!
Even if it can scale to 10-20 per second, that would make things far more
viable. Sounds like it shouldn't be much of an issue with the changes you
mentioned.
As it sits we have roughly 1300 (and
On Thu, Jan 21, 2021 at 11:51 AM Adam Boyhan wrote:
>
> I was able to trigger the issue again.
>
> - On the primary I created a snap called TestSnapper for disk vm-100-disk-1
> - Allowed the next RBD-Mirror scheduled snap to complete
> - At this point the snapshot is showing up on the remote side.
Sure thing.
root@Bunkcephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-1
SNAPID NAME SIZE PROTECTED TIMESTAMP NAMESPACE
12192 TestSnapper1 2 TiB Thu Jan 21 14:15:02 2021 user
12595 .mirror.non_primary.a04e92df-3d64-4dc4-8ac8-eaba17b45403.34c4a53e-9525-446c-8de6-409ea93c5edd 2 TiB Thu
Hi!
I'm responding to the list, as it may help others.
I've also reordered the response.
> On Mon, Jan 18, 2021 at 2:41 PM Gilles Mocellin <
>
> gilles.mocel...@nuagelibre.org> wrote:
> > Hello Cephers,
> >
> > On a new cluster, I only have 2 RBD block images, and the Dashboard
> > doesn't manage to lis
On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
Hi,
On 21.01.21 at 05:42, Chris Dunlop wrote:
Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or
is it just "this is crazy large, if you're trying to go over this you're
doing something wrong, rethink your life..
On Thu, Jan 21, 2021 at 6:18 PM Chris Dunlop wrote:
>
> On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
> > Hi,
> >
> > On 21.01.21 at 05:42, Chris Dunlop wrote:
> >
> >> Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or
> >> is it just "this is crazy large, if y
On Thu, Jan 21, 2021 at 07:52:00PM -0500, Jason Dillaman wrote:
On Thu, Jan 21, 2021 at 6:18 PM Chris Dunlop wrote:
On Thu, Jan 21, 2021 at 10:57:49AM +0100, Robert Sander wrote:
On 21.01.21 at 05:42, Chris Dunlop wrote:
Is there any particular reason for that MAX_OBJECT_MAP_OBJECT_COUNT, or