We are using Luminous; we have seven Ceph nodes and set them all up as MDS.
Recently the MDSs have been failing very frequently, and when there is only one
MDS left, the CephFS just degrades to unusable.
Checking the MDS log on one Ceph node, I found the following:
>
Ok, I'll try these params. thx!
From: Maged Mokhtar
Sent: December 12, 2018 10:51
To: Klimenko, Roman; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph pg backfill_toofull
There are 2 relevant params
mon_osd_full_ratio 0.95
osd_backfill_full_
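In case it is useful, here is roughly how those ratios can be inspected and
adjusted on Luminous (the values below are only examples, pick ones that fit
your cluster):

# show the ratios currently stored in the OSDMap
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

# adjust them cluster-wide
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95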
Hi
That means the 'mv' operation should succeed if src and dst
are in the same pool, and the client should have the same permissions
on both src and dst.
Do I have the right understanding?
Yes.
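If you want to double-check before a large move, something along these lines
should work (a rough sketch; the paths and client name are placeholders):

# which data pool does an existing file in each location use?
getfattr -n ceph.file.layout.pool /mnt/cephfs/dir-a/some-file
getfattr -n ceph.file.layout.pool /mnt/cephfs/dir-b/some-file

# and do the client's caps cover both paths?
ceph auth get client.myclient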
k
That did the trick. We had it set to 0 just on the Swift RGW definitions
although it was set on the other RGW services; I'm guessing someone must have
thought there was a different precedence in play in the past.
On Tue, 2018-12-11 at 11:41 -0500, Casey Bodley wrote:
Hi Leon,
Are you running with
Hi
Thanks for the explanation.
I did a test few moments ago. Everything goes just like what I expect.
Thanks for your helps :)
On Wed, Dec 12, 2018 at 4:57 PM Konstantin Shalygin wrote:
> Hi
>
> That means the 'mv' operation should succeed if src and dst
> are in the same pool, and the client should have
Hi Daniel, thanks for looking at this.
These are the mount options
type nfs4 (rw,nodev,relatime,vers=4,intr,local_lock=none,retrans=2,proto=tcp,rsize=8192,wsize=8192,hard,namlen=255,sec=sys)
I have overwritten the original files, so I cannot examine if they had
holes. To be honest I don't
Thank you all for your input.
My best guess at the moment is that deep-scrub performs as it should, and
the issue is simply that it has no limit on its own throughput, so it uses all
the OSD time it can get. Even if it has lower priority than client IO, it can
still fill the disk queue and effectively bottleneck client I/O.
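For what it is worth, these are the knobs I would look at first to put a
ceiling on scrub impact (a sketch; the values are examples, not
recommendations):

# current settings on one OSD
ceph daemon osd.0 config show | grep -E 'osd_max_scrubs|osd_scrub_sleep|osd_scrub_.*_hour'

# throttle scrubbing at runtime
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1 --osd_max_scrubs 1'

# optionally confine scrubs to off-peak hours
ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'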
Greg, for example, for our cluster of ~1000 OSDs:
size osdmap.1357881__0_F7FE779D__none = 363KB (crush_version 9860,
modified 2018-12-12 04:00:17.661731)
size osdmap.1357882__0_F7FE772D__none = 363KB
size osdmap.1357883__0_F7FE74FD__none = 363KB (crush_version 9861,
modified 2018-12-12 04:00:27.385702)
On Tue, Dec 11, 2018 at 8:16 PM Mark Kirkwood wrote:
>
> Looks like the 'delaylog' option for xfs is the problem - no longer supported
> in later kernels. See
> https://github.com/torvalds/linux/commit/444a702231412e82fb1c09679adc159301e9242c
>
> Offhand I'm not sure where that option is being a
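If it helps narrow that down, with filestore the xfs mount options usually
come from the OSD config, so that is where I would look first (just a guess,
not a confirmed diagnosis):

# mount options ceph-disk/ceph-volume use for the OSD's xfs data partition
ceph daemon osd.0 config get osd_mount_options_xfs

# or check whether it is set somewhere on the host itself
grep delaylog /etc/ceph/ceph.conf /etc/fstab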
On Tue, Dec 11, 2018 at 7:28 PM Tyler Bishop wrote:
>
> Now I'm just trying to figure out how to create filestore in Luminous.
> I've read every doc and tried every flag but I keep ending up with
> either a data LV of 100% on the VG or a bunch of random errors for
> unsupported flags...
An LV wit
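Not sure if this matches what you already tried, but the filestore path
through ceph-volume looks roughly like this (a sketch; the device names are
placeholders):

# single step: ceph-volume creates the LVs itself
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1

# or split into prepare + activate
ceph-volume lvm prepare --filestore --data /dev/sdb --journal /dev/sdc1
ceph-volume lvm activate --filestore <osd-id> <osd-fsid>

Note that when given a whole device for --data it will carve out a data LV
covering the whole VG, which may be the 100% data LV you were seeing.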
Hi Jeff
Many thanks for this! Looking forward to testing it out.
Could you elaborate a bit on why Nautilus is recommended for this set-up,
please? Would attempting this with a Luminous cluster be a non-starter?
On Wed, 12 Dec 2018, 12:16 Jeff Layton wrote:
> (Sorry for the duplicate email to ganesha li
Hi,
We are using Luminous and copying a 100TB RBD image to DR site using RBD
Mirror.
Everything seems to work fine.
The question is, can we mount the DR copy as read-only? We can do it on
NetApp, and we are trying to figure out if we can somehow mount it RO on the DR
site; then we can do backups at
On 12/12/18 4:44 PM, Vikas Rana wrote:
> Hi,
>
> We are using Luminous and copying a 100TB RBD image to DR site using RBD
> Mirror.
>
> Everything seems to work fine.
>
> The question is, can we mount the DR copy as read-only? We can do it on
> NetApp, and we are trying to figure out if someh
Okay, this all looks fine, and it's extremely unlikely that a text file
will have holes in it (I thought holes, because rsync handles holes, but
wget would just copy zeros instead).
Is this reproducible? If so, can you turn up Ganesha logging and post a
log file somewhere?
Daniel
On 12/12/
Hey Abhishek,
We just noticed that the debuginfo is missing for 12.2.10:
http://download.ceph.com/rpm-luminous/el7/x86_64/ceph-debuginfo-12.2.10-0.el7.x86_64.rpm
Did something break in the publishing?
Cheers, Dan
On Tue, Nov 27, 2018 at 3:50 PM Abhishek Lekshmanan wrote:
>
>
> We're happy to a
Hey Dan,
Thanks for bringing this to our attention. Looks like it did get left
out. I just pushed the package and added a step to the release process
to make sure packages don't get skipped again like that.
- David
On 12/12/2018 11:03 AM, Dan van der Ster wrote:
> Hey Abhishek,
>
> We just no
In such a situation, we noticed a performance drop (caused by the
filesystem) and soon had no free inodes left.
___
Clyso GmbH
On 12.12.2018 at 09:24, Klimenko, Roman wrote:
Ok, I'll try these params. thx!
---
To give more output: the filesystem is XFS.
root@vtier-node1:~# rbd-nbd --read-only map testm-pool/test01
2018-12-12 13:04:56.674818 7f1c56e29dc0 -1 asok(0x560b19b3bdf0)
AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to
bind the UNIX domain socket to '/var/run/ceph/ceph-client.a
When I promoted the DR image, I could mount it fine:
root@vtier-node1:~# rbd mirror image promote testm-pool/test01 --force
Image promoted to primary
root@vtier-node1:~#
root@vtier-node1:~# mount /dev/nbd0 /mnt
mount: block device /dev/nbd0 is write-protected, mounting read-only
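For the non-promoted copy, what sometimes works (assuming the image holds XFS,
as above) is mapping it read-only and skipping log recovery at mount time; a
sketch:

rbd-nbd --read-only map testm-pool/test01
mount -o ro,norecovery,nouuid /dev/nbd0 /mnt

Whether reading from a non-primary image while mirroring is running is
supported for your use case is a separate question.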
On Wed, Dec 12, 20
Hmm that does seem odd. How are you looking at those sizes?
On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov wrote:
> Greg, for example, for our cluster of ~1000 OSDs:
>
> size osdmap.1357881__0_F7FE779D__none = 363KB (crush_version 9860,
> modified 2018-12-12 04:00:17.661731)
> size osdmap.1357882__0_F
Those are the sizes in the filesystem; I use filestore as the backend.
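Concretely, something like this on one of the OSD hosts (assuming the default
filestore paths; the osdmap objects live in the OSD's "meta" collection):

find /var/lib/ceph/osd/ceph-0/current/meta -name 'osdmap*' -exec ls -lh {} + | tail -5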
On Wed, Dec 12, 2018, 22:53 Gregory Farnum wrote:
> Hmm that does seem odd. How are you looking at those sizes?
>
> On Wed, Dec 12, 2018 at 4:38 AM Sergey Dolgov wrote:
>
>> Greg, for example, for our cluster of ~1000 OSDs:
>>
>> size osdmap.1357881
Hi all,
I have a cluster used exclusively for cephfs (an EC "media" pool, and a standard
metadata pool for the cephfs).
"ceph -s" shows me:
---
data:
pools: 2 pools, 260 pgs
objects: 37.18 M objects, 141 TiB
usage: 177 TiB used, 114 TiB / 291 TiB avail
pgs: 260 active+c
Safest to just 'osd crush reweight osd.X 0' and let rebalancing finish.
Then 'osd out X' and shut down/remove the OSD drive.
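Spelled out, the whole sequence looks roughly like this on Luminous or newer
(a sketch; X is the OSD id, and on older releases replace 'osd purge' with
'osd crush remove' + 'auth del' + 'osd rm'):

ceph osd crush reweight osd.X 0          # drain the OSD; data migrates off it
ceph -s                                  # wait until all PGs are active+clean again
ceph osd out X
systemctl stop ceph-osd@X                # on the OSD host
ceph osd purge X --yes-i-really-mean-it  # removes the crush entry, auth key and OSD id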
On 2018-12-04 03:15, Jarek wrote:
On Mon, 03 Dec 2018 16:41:36 +0100
si...@turka.nl wrote:
Hi,
Currently I am decommissioning an old cluster.
For example, I want to remo
Hello,
Do you see the cause of the logged errors?
I can't find any documentation about that, so I'm stuck.
I really need help.
Thanks everybody
Marco
On Fri, Dec 7, 2018 at 17:30, Marco Aroldi wrote:
> Thanks Greg,
> Yes, I'm using CephFS and RGW (mainly CephFS)
> The files are still a
Hello collective wisdom,
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
here.
I have a working cluster here consisting of 3 monitor hosts, 64 OSD processes
across 4 osd hosts, plus 2 MDSs, plus 2 MGRs. All of that is consumed by 10
client nodes.
Every host in t
On Thu, Dec 13, 2018 at 2:55 AM Sang, Oliver wrote:
>
> We are using Luminous; we have seven Ceph nodes and set them all up as MDS.
>
> Recently the MDSs have been failing very frequently, and when there is only one MDS left,
> the CephFS just degrades to unusable.
>
>
>
> Checking the MDS log on one Ceph node,
I have a Mimic Bluestore EC RBD pool running on 8+2; it currently spans
4 nodes.
3 nodes are running Toshiba disks while one node is running Seagate disks
(same size, spinning speed, enterprise class, etc.). I have noticed a huge
difference in IOWAIT and disk latency performance betw
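If you want to put numbers on it, a quick first pass could be (sketch):

ceph osd perf        # commit/apply latency as reported by each OSD
iostat -x 5          # per-device await and %util on the OSD hosts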