Hi,
I have a 3 node cluster of mimic with 9 osds (3 osds on each node).
I use this cluster to test integration of an application with S3 api.
The problem is that after a few days all the OSDs start filling up with
bluestore logs and go down and out one by one!
I cannot stop the logs and I cannot fi
Thanks a lot, I've run that and that was perfectly ok :)
On Mon, Jun 15, 2020 at 5:24 PM Matthew Vernon wrote:
> On 14/06/2020 17:07, Khodayar Doustar wrote:
>
> > Now I want to add the other two nodes as monitor and rgw.
> >
> > Can I just modify the ansible host
[ceph-users] Fwd: Re-run ansible to add monitor and RGWs
>
> Any ideas on this?
>
> -- Forwarded message -
> From: Khodayar Doustar
> Date: Sun, Jun 14, 2020 at 6:07 PM
> Subject: Re-run ansible to add monitor and RGWs
> To: ceph-users
>
>
> Hi,
>
>
Hi,
I've installed my ceph cluster with ceph-ansible a few months ago. I've
just added one monitor and one rgw at that time.
So I have 3 nodes, of which one is a monitor and rgw and the other two are
OSD only.
Now I want to add the other two nodes as monitor and rgw.
Can I just modify the ansible hos
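Something like this is what I have in mind, as a rough sketch assuming a
standard ceph-ansible layout (the group names and site.yml are the usual
ones from the ceph-ansible docs; node1/node2/node3 are placeholder host
names):

  # hosts (ceph-ansible inventory)
  [mons]
  node1
  node2
  node3

  [rgws]
  node1
  node2
  node3

  # [osds] stays as it is

  # then re-run the playbook against the updated inventory
  ansible-playbook -i hosts site.yml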
So this is your problem; it has nothing to do with Ceph. Just fix the
network or roll back all the changes.
On Sun, May 24, 2020 at 9:05 AM Amudhan P wrote:
> No, ping with MTU size 9000 didn't work.
>
> On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar
> wrote:
>
> >
Does your ping work or not?
On Sun, May 24, 2020 at 6:53 AM Amudhan P wrote:
> Yes, I have set the setting on the switch side also.
>
> On Sat 23 May, 2020, 6:47 PM Khodayar Doustar,
> wrote:
>
>> The problem should be with the network. When you change the MTU it should
>> be changed
The problem should be with the network. When you change the MTU it should be
changed all over the network; every single hop on your network should speak
and accept 9000 MTU packets. You can check it on your hosts with the
"ifconfig" command, and there are equivalent commands for other
network/security devices.
I
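For example, a quick check on the Linux hosts themselves (eth0 and the peer
address are placeholders):

  # show the MTU currently configured on the interface
  ip link show dev eth0

  # check that 9000-byte frames really pass end to end:
  # 8972 = 9000 - 20 (IP header) - 8 (ICMP header), -M do forbids fragmentation
  ping -c 3 -M do -s 8972 <other-node-ip>

  # if this fails while a plain "ping <other-node-ip>" works,
  # some hop in between is still limited to MTU 1500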
) that policies are applied at the bucket
> level, so you would need a second bucket.
>
> Cheers,
> Tom
>
> On Tue, May 19, 2020 at 11:54 PM Khodayar Doustar
> wrote:
>
>> Hi,
>>
>> I'm using Nautilus and I'm using the whole cluster mainly for a
Does anyone know anything about this?
Hi,
I'm using Nautilus and I'm using the whole cluster mainly for a single
bucket in RadosGW.
There is a lot of data in this bucket (petabyte scale) and I don't want to
waste all of the SSDs on it.
Is there any way to automatically set some aging threshold for this data and
e.g. move any data older than
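Something along these lines is what I have in mind, as a rough sketch with
the AWS CLI against the RGW endpoint (the bucket name, endpoint and the
"COLD" storage class are made up, and I am not sure which RGW versions
support storage-class transitions):

  lifecycle.json (a hypothetical rule: objects older than 90 days move to a
  storage class backed by a cheaper pool; "COLD" would have to be defined
  first):

    {
      "Rules": [
        {
          "ID": "age-out-to-cold",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Transitions": [{"Days": 90, "StorageClass": "COLD"}]
        }
      ]
    }

  and then apply it to the bucket through the S3 API:

    aws --endpoint-url http://my-rgw:8080 s3api \
        put-bucket-lifecycle-configuration \
        --bucket my-big-bucket \
        --lifecycle-configuration file://lifecycle.json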
epel: pkg.adfinis-sygroup.ch
> > * extras: pkg.adfinis-sygroup.ch
> > * updates: pkg.adfinis-sygroup.ch
> > Warning: No matches found for: python3-cherrypy
> > No matches found
> >
> >
> > But as you can see, it cannot find it.
> >
> > Anything
t-simple.noarch 0.2.0-1.el7epel
> python36-jwt.noarch1.6.4-2.el7epel
>
>
> How do I get either the right packages or a workaround, because I can
> install the dependencies with pip?
>
>
> Regards,
>
> Sim
Hi Simon,
Have you tried installing them with yum?
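For example, something along these lines (I have not tested this on CentOS 7
myself, so take it as a rough sketch):

  # make sure EPEL is enabled, then see what yum can actually find
  yum install -y epel-release
  yum search cherrypy jwt

  # if a package really is not available in any repo, pip is a possible
  # workaround
  yum install -y python3-pip
  pip3 install cherrypy pyjwt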
On Wed, Apr 22, 2020 at 6:16 PM Simon Sutter wrote:
> Hello everybody
>
>
> In Octopus there are some interesting-looking features, so I tried to
> upgrade my CentOS 7 test nodes, according to:
> https://docs.ceph.com/docs/master/releases/
Hi Harald,
- then I build a 3-node OSD cluster, pass through all the disks, and install
the mgr daemon on them
- I build 3 separate mon servers and install the rgw there, right?
As other friends suggested, you can use VMs for the mgr, mon and rgw; they
are not very IOPS intensive and they are very flexible.
Hi Vasishta,
Have you checked that OSD's systemd log and perf counters? You can check
its metadata and bluefs logs to see what's going on.
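For example (replace the id 3 with one of the OSDs that went down; these are
just the generic places I would look first):

  # recent log of that OSD daemon
  journalctl -u ceph-osd@3 --since "1 hour ago"

  # perf counters, including bluefs/bluestore stats
  # (run on the node that hosts osd.3)
  ceph daemon osd.3 perf dump | less

  # metadata: bluestore devices, sizes, versions, ...
  ceph osd metadata 3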
Thanks,
Khodayar
On Mon, Apr 20, 2020 at 9:48 PM Vasishta Shastry
wrote:
> Hi,
>
> I upgraded a luminous cluster to nautilus and migrated Filestore OSD to
Hi Shridhar,
As Ilya suggested:
Use image names instead of device names, i.e. "rbd unmap myimage"
instead of "rbd unmap /dev/rbd0".
I think this will solve the problem. You just need to advise your
orchestrator/hypervisor to use image names instead of /dev/rbd0...
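For example (pool and image names are made up):

  # see which images are mapped to which /dev/rbdX devices
  rbd showmapped

  # unmap by pool/image name instead of by device node
  rbd unmap mypool/myimage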
About udev rule I guess not, beca
on it might be doing that in yours?
>
> > On Apr 18, 2020, at 12:59 PM, Khodayar Doustar
> wrote:
> >
> > Hi,
> >
> > I have a 3 node cluster of mimic with 9 osds (3 osds on each node).
> > I use this cluster to test integration of an application with S3 a
Hi Harald,
OSD count means the number of disks you are going to allocate to Ceph; you
can change the whole column by clicking on the "OSD #" header at the top of
the table.
There are also some predefined recommendations for various use cases under
"Ceph Use Case Selector:"; you can find them on the same page.
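As a rough worked example of what the calculator does (the usual rule of
thumb from the Ceph docs, so adjust for your own pool layout):

  Total PGs ~= (number of OSDs x 100) / replica size,
  rounded up to the nearest power of two.

  e.g. 12 OSDs with 3x replication: (12 x 100) / 3 = 400  ->  512 PGs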