[ceph-users] OSDs get full with bluestore logs

2020-08-17 Thread Khodayar Doustar
Hi, I have a 3-node Mimic cluster with 9 OSDs (3 OSDs on each node). I use this cluster to test integration of an application with the S3 API. The problem is that after a few days all OSDs start filling up with bluestore logs and go down and out one by one! I cannot stop the logs and I cannot fi
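
If the space really is being eaten by OSD debug output, a minimal sketch of one way to quieten it, assuming the standard bluestore/bluefs/rocksdb debug settings and an example OSD id (the 1/5 levels and the ceph.conf snippet are illustrative, not something confirmed in this thread):

    # Inspect the current debug levels on one OSD (run on its host)
    ceph daemon osd.0 config show | grep -E 'debug_(bluestore|bluefs|rocksdb)'

    # Lower them at runtime for all OSDs
    ceph tell osd.* injectargs '--debug_bluestore 1/5 --debug_bluefs 1/5 --debug_rocksdb 1/5'

    # Persist the change in ceph.conf so it survives restarts
    [osd]
    debug bluestore = 1/5
    debug bluefs = 1/5
    debug rocksdb = 1/5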

[ceph-users] Re: Re-run ansible to add monitor and RGWs

2020-06-22 Thread Khodayar Doustar
Thanks a lot, I've run that and that was perfectly ok :) On Mon, Jun 15, 2020 at 5:24 PM Matthew Vernon wrote: > On 14/06/2020 17:07, Khodayar Doustar wrote: > > > Now I want to add the other two nodes as monitor and rgw. > > > > Can I just modify the ansible host

[ceph-users] Re: Fwd: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Khodayar Doustar
-users] Fwd: Re-run ansible to add monitor and RGWs > > Any ideas on this? > > -- Forwarded message - > From: Khodayar Doustar > Date: Sun, Jun 14, 2020 at 6:07 PM > Subject: Re-run ansible to add monitor and RGWs > To: ceph-users > > > Hi, > >

[ceph-users] Fwd: Re-run ansible to add monitor and RGWs

2020-06-15 Thread Khodayar Doustar
Any ideas on this? -- Forwarded message - From: Khodayar Doustar Date: Sun, Jun 14, 2020 at 6:07 PM Subject: Re-run ansible to add monitor and RGWs To: ceph-users Hi, I've installed my ceph cluster with ceph-ansible a few months ago. I've just added one monitor a

[ceph-users] Re-run ansible to add monitor and RGWs

2020-06-14 Thread Khodayar Doustar
Hi, I installed my ceph cluster with ceph-ansible a few months ago. I only added one monitor and one rgw at that time, so I have 3 nodes, of which one is monitor and rgw and the other two are OSD-only. Now I want to add the other two nodes as monitor and rgw. Can I just modify the ansible hos
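
A minimal sketch of what such an inventory change could look like, assuming ceph-ansible's usual group names (mons, osds, rgws) and hypothetical hostnames node1..node3, where node1 is the original mon/rgw and the other two are being added:

    [mons]
    node1
    node2
    node3

    [osds]
    node1
    node2
    node3

    [rgws]
    node1
    node2
    node3

After editing the inventory, re-running the playbook is what the replies point to; the exact command Matthew suggested is truncated above, so ansible-playbook -i hosts site.yml is only an illustration.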

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-24 Thread Khodayar Doustar
So this is your problem; it has nothing to do with Ceph. Just fix the network or roll back all the changes. On Sun, May 24, 2020 at 9:05 AM Amudhan P wrote: > No, ping with MTU size 9000 didn't work. > > On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar > wrote: > > >

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Khodayar Doustar
Does your ping work or not? On Sun, May 24, 2020 at 6:53 AM Amudhan P wrote: > Yes, I have set the setting on the switch side also. > > On Sat 23 May, 2020, 6:47 PM Khodayar Doustar, > wrote: > >> Problem should be with network. When you change MTU it should be changed &g

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Khodayar Doustar
The problem should be with the network. When you change the MTU it should be changed all over the network; every single hop on your network should speak and accept 9000 MTU packets. You can check it on your hosts with the "ifconfig" command, and there are also equivalent commands for other network/security devices. I
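
A hedged example of the kind of check being described, with a placeholder interface name and peer address (8972 = 9000 bytes minus 28 bytes of IP and ICMP headers; -M do forbids fragmentation):

    ip link show eth0 | grep mtu         # or: ifconfig eth0
    ping -M do -s 8972 -c 3 10.0.0.2     # must succeed on every host-to-host path for jumbo frames to be usable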

[ceph-users] Re: Aging in S3 or Moving old data to slow OSDs

2020-05-20 Thread Khodayar Doustar
) that policies are applied at the bucket > level, so you would need a second bucket. > > Cheers, > Tom > > On Tue, May 19, 2020 at 11:54 PM Khodayar Doustar > wrote: > >> Hi, >> >> I'm using Nautilus and I'm using the whole cluster mainly for a
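
A hedged illustration of the bucket-level lifecycle transition being referred to, assuming an RGW zone that defines a slower storage class; the bucket name, class name (COLD_HDD), age threshold and endpoint are all placeholders:

    # lifecycle.json
    {
      "Rules": [
        {
          "ID": "age-out-to-slow-storage",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [
            {"Days": 90, "StorageClass": "COLD_HDD"}
          ]
        }
      ]
    }

    aws s3api put-bucket-lifecycle-configuration --bucket mybucket \
        --lifecycle-configuration file://lifecycle.json \
        --endpoint-url http://rgw.example.com:8080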

[ceph-users] Re: Aging in S3 or Moving old data to slow OSDs

2020-05-20 Thread Khodayar Doustar
Does anyone know anything about this?

[ceph-users] Aging in S3 or Moving old data to slow OSDs

2020-05-19 Thread Khodayar Doustar
Hi, I'm using Nautilus and I'm using the whole cluster mainly for a single bucket in RadosGW. There is a lot of data in this bucket (petabyte scale) and I don't want to waste all of the SSDs on it. Is there any way to automatically set some aging threshold for this data and, e.g., move any data older than

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Khodayar Doustar
epel: pkg.adfinis-sygroup.ch > > * extras: pkg.adfinis-sygroup.ch > > * updates: pkg.adfinis-sygroup.ch > > Warning: No matches found for: python3-cherrypy > > No matches found > > > > > > But as you can see, it cannot find it. > > > > Anything

[ceph-users] Re: Upgrading to Octopus

2020-04-23 Thread Khodayar Doustar
t-simple.noarch  0.2.0-1.el7  epel > python36-jwt.noarch  1.6.4-2.el7  epel > > > How do I get either the right packages or a workaround, because I can > install the dependencies with pip? > > > Regards, > > Sim

[ceph-users] Re: Upgrading to Octopus

2020-04-22 Thread Khodayar Doustar
Hi Simon, Have you tried installing them with yum? On Wed, Apr 22, 2020 at 6:16 PM Simon Sutter wrote: > Hello everybody > > > In Octopus there are some interesting-looking features, so I tried > upgrading my CentOS 7 test nodes according to: > https://docs.ceph.com/docs/master/releases/
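
A hedged sketch of the yum checks implied by that suggestion, assuming EPEL is where a python3 CherryPy build would come from on CentOS 7 (whether such a package exists at all is not confirmed in this thread):

    yum install -y epel-release
    yum search cherrypy                  # see what the enabled repos actually provide
    pip3 install --user cherrypy         # the pip fallback Simon mentions later in the thread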

[ceph-users] Re: some ceph general questions about the design

2020-04-20 Thread Khodayar Doustar
Hi Harald, - then I build a 3-node OSD cluster, pass through all disks and install the mgr daemon on them - I build 3 separate mon servers and install the rgw there, right? As other friends suggested, you can use VMs for mgr, mon and rgw; they are not so IOPS-intensive and they are very flexible

[ceph-users] Re: : nautilus : progress section in ceph status is stuck

2020-04-20 Thread Khodayar Doustar
Hi Vasishta, Have you checked that OSD's systemd log and perf counters? You can check its metadata and bluefs logs to see what's going on. Thanks, Khodayar On Mon, Apr 20, 2020 at 9:48 PM Vasishta Shastry wrote: > Hi, > > I upgraded a Luminous cluster to Nautilus and migrated Filestore OSD to
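
A hedged sketch of those checks, with a placeholder OSD id (run the admin-socket command on the host that carries that OSD):

    journalctl -u ceph-osd@3 --since "1 hour ago"     # systemd/journal log for the OSD
    ceph daemon osd.3 perf dump | less                # perf counters, including the bluefs section
    ceph osd metadata 3                               # the OSD's reported metadata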

[ceph-users] Re: rbd device name reuse frequency

2020-04-20 Thread Khodayar Doustar
Hi Shridhar, As Ilya suggested: use image names instead of device names, i.e. "rbd unmap myimage" instead of "rbd unmap /dev/rbd0". I think this will solve the problem. You just need to advise your orchestrator/hypervisor to use image names instead of device names like /dev/rbd0... About the udev rule, I guess not, beca
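
A hedged example of the image-name usage Ilya is pointing at, with placeholder pool/image names:

    rbd map mypool/myimage       # kernel picks a /dev/rbdX; you don't have to track which one
    rbd showmapped               # shows the current image-to-device mapping if you need it
    rbd unmap mypool/myimage     # unmap by image spec instead of /dev/rbd0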

[ceph-users] Re: OSDs get full with bluestore logs

2020-04-18 Thread Khodayar Doustar
on it might be doing that in yours? > > > On Apr 18, 2020, at 12:59 PM, Khodayar Doustar > wrote: > > > > Hi, > > > > I have a 3 node cluster of mimic with 9 osds (3 osds on each node). > > I use this cluster to test integration of an application with S3 a

[ceph-users] Re: some ceph general questions about osd and pg

2020-04-18 Thread Khodayar Doustar
Hi Harald, OSD count means the number of disks you are going to allocate to Ceph; you can change the whole column by clicking on "OSD #" at the top of the table. There are also some predefined recommendations for various use cases under "Ceph Use Case Selector:"; you can find them on the same page.
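
For reference, a hedged sketch of the rule-of-thumb arithmetic that calculator is built on (the 100-PGs-per-OSD target is the usual guideline; the example numbers are illustrative and the calculator applies its own power-of-two rounding):

    total PGs per pool ~= (number of OSDs * 100) / replica size
    e.g. 9 OSDs, size 3:  (9 * 100) / 3 = 300  ->  rounded to a nearby power of two (256)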

[ceph-users] OSDs get full with bluestore logs

2020-04-18 Thread Khodayar Doustar
Hi, I have a 3-node Mimic cluster with 9 OSDs (3 OSDs on each node). I use this cluster to test integration of an application with the S3 API. The problem is that after a few days all OSDs start filling up with bluestore logs and go down and out one by one! I cannot stop the logs and I cannot fi