Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> After having upgraded my ceph cluster from Luminous to Nautilus 14.2.6,
> from time to time "ceph health detail" complains about some "Long heartbeat
> ping times on front/back interface seen".
>
> As far as I can understand (after having r
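This warning is new with Nautilus. As a sketch (assuming 14.2.5 or later, where the admin-socket command and the threshold options below exist; the OSD id and the 2000 ms value are only placeholders), you can inspect the recorded ping times and raise the threshold that triggers the warning:

    # dump the heartbeat ping times one OSD has recorded (argument 0 = show all)
    ceph daemon osd.0 dump_osd_network 0

    # raise the warning threshold to 2000 ms (default 0 = derive it from
    # mon_warn_on_slow_ping_ratio * osd_heartbeat_grace)
    ceph config set global mon_warn_on_slow_ping_time 2000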
I can report similar results, although it's probably not just due to
cluster size.
Our cluster has 1248 OSDs at the moment and we have three active MDSs to
spread the metadata operations evenly. However, I noticed that it isn't
spread evenly at all. Usually, it's just one MDS (in our case mds.
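If the imbalance is the usual "one rank gets everything" pattern, one option is explicit directory pinning, which ties subtrees to MDS ranks; a sketch where the mount path and rank are just examples:

    # pin a subtree to MDS rank 1 (xattr is set on the CephFS client mount)
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects
    # setting the value to -1 removes the pin again
    setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projects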
Thanks for your answer
MON-MGR hosts have a mgmt network and a public network.
OSD nodes instead have a mgmt network, a public network, and a cluster
network.
This is what I have in ceph.conf:
public network = 192.168.61.0/24
cluster network = 192.168.222.0/24
public and cluster networks are 10
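For what it's worth, you can cross-check which front (public) and back (cluster) addresses each OSD actually bound to, since those are the interfaces the heartbeat warning refers to; the OSD id below is a placeholder:

    # show the registered addresses of one OSD, including heartbeat front/back
    ceph osd metadata 0 | grep -E 'front_addr|back_addr'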
Hello All,
I have a HW RAID based 240 TB data pool with about 200 million files for
users in a scientific institution. Data sizes range from tiny parameter
files for scientific calculations and experiments to huge images of
brain scans. There are group directories, home directories, Windows
r
Hi,
Quoting Massimo Sgaravatto (massimo.sgarava...@gmail.com):
> Thanks for your answer
>
> MON-MGR hosts have a mgmt network and a public network.
> OSD nodes instead have a mgmt network, a public network, and a cluster
> network.
> This is what I have in ceph.conf:
>
> public network = 192.168
Hi,
Quoting Willi Schiegel (willi.schie...@technologit.de):
> Hello All,
>
> I have a HW RAID based 240 TB data pool with about 200 million files for
> users in a scientific institution. Data sizes range from tiny parameter
> files for scientific calculations and experiments to huge images of bra
On Wed, 29 Jan 2020 at 16:52, Matthew Vernon wrote:
> Hi,
>
> On 29/01/2020 16:40, Paul Browne wrote:
>
> > Recently we deployed a brand new Stein cluster however, and I'm curious
> > whether the idea of pointing the new OpenStack cluster at the same RBD
> > pools for Cinder/Glance/Nova as the Lu
Hi,
Quoting Paul Browne (pf...@cam.ac.uk):
> On Wed, 29 Jan 2020 at 16:52, Matthew Vernon wrote:
>
> > Hi,
> >
> > On 29/01/2020 16:40, Paul Browne wrote:
> >
> > > Recently we deployed a brand new Stein cluster however, and I'm curious
> > > whether the idea of pointing the new OpenStack cluste
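For context, pointing a second OpenStack cluster at the same pools mostly comes down to the per-cluster Cinder RBD backend stanza; a sketch, assuming a dedicated Ceph auth user for the new Stein cluster (the user name and secret UUID are placeholders):

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder-stein
    rbd_secret_uuid = <libvirt secret uuid>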
I am testing failure scenarios for my cluster. I have 3 monitors. Let's say
mons 1 and 2 go down and so the monitors can't form a quorum; how can I recover?
Are the instructions at the following link valid for deleting mons 1 and 2 from
the monmap?
https://access.redhat.com/documentation/en-us/red_hat_c
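For what it's worth, the upstream procedure is to rewrite the monmap on the surviving monitor; a sketch, assuming the survivor is mon.c and the dead ones are mon.a and mon.b:

    # on the surviving monitor host, with its ceph-mon daemon stopped
    ceph-mon -i c --extract-monmap /tmp/monmap
    monmaptool /tmp/monmap --rm a
    monmaptool /tmp/monmap --rm b
    ceph-mon -i c --inject-monmap /tmp/monmap
    # start mon.c again; it can now form a quorum on its own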
On 1/30/20 1:34 PM, vis...@denovogroup.org wrote:
> I am testing failure scenarios for my cluster. I have 3 monitors. Let's say if
> mons 1 and 2 go down and so the monitors can't form a quorum, how can I recover?
>
> Are the instructions at the following link valid for deleting mons 1 and 2 from
> m
On Thu, Jan 30, 2020 at 1:38 PM Wido den Hollander wrote:
>
>
>
> On 1/30/20 1:34 PM, vis...@denovogroup.org wrote:
> > I am testing failure scenarios for my cluster. I have 3 monitors. Let's say
> > if mons 1 and 2 go down and so the monitors can't form a quorum, how can I
> > recover?
> >
> > Are th
On 1/30/20 1:55 PM, Gregory Farnum wrote:
> On Thu, Jan 30, 2020 at 1:38 PM Wido den Hollander wrote:
>>
>>
>>
>> On 1/30/20 1:34 PM, vis...@denovogroup.org wrote:
> > > I am testing failure scenarios for my cluster. I have 3 monitors. Let's say
> > > if mons 1 and 2 go down and so monitors can't fo
We are looking to roll out an all-flash Ceph cluster as storage for our cloud
solution. The OSDs will be on slightly slower Micron 5300 PROs, with WAL/DB
on Micron 7300 MAX NVMe drives.
My main concern with whether Ceph can fit the bill is its snapshot abilities.
For each RBD we would like the
We are making hourly snapshots of ~400 rbd drives in one (spinning-rust)
cluster. The snapshots are made one by one.
Total size of the base images is around 80TB. The entire process takes a
few minutes.
We do not experience any problems doing this.
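For anyone curious what "one by one" looks like in practice, a minimal sketch (pool name and snapshot naming scheme are placeholders):

    # create a timestamped snapshot of every image in the pool, sequentially
    for img in $(rbd ls rbd); do
        rbd snap create rbd/${img}@hourly-$(date +%Y%m%d%H)
    done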
On Thu, 30 Jan 2020 at 15:30, Adam Boyhan wrote:
Bastiaan Visser (bastiaan) writes:
> We are making hourly snapshots of ~400 rbd drives in one (spinning-rust)
> cluster. The snapshots are made one by one.
> Total size of the base images is around 80TB. The entire process takes a
> few minutes.
> We do not experience any problems doing this.
On Thu, 30 Jan 2020 at 15:29, Adam Boyhan wrote:
> We are looking to roll out an all-flash Ceph cluster as storage for our
> cloud solution. The OSDs will be on slightly slower Micron 5300 PROs,
> with WAL/DB on Micron 7300 MAX NVMe drives.
> My main concern with Ceph being able to fit the bill is i
It's my understanding that pool snapshots would basically put us in an
all-or-nothing situation where we would have to revert all RBDs in a pool. If
we could clone a pool snapshot for filesystem-level access like an rbd snapshot,
that would help a ton.
Thanks,
Adam Boyhan
System & Net
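Per-image snapshots plus clones already give that filesystem-level access, though image by image rather than for a whole pool at once; a sketch with placeholder names:

    # snapshot a single RBD, protect it, and clone it into a writable image
    rbd snap create rbd/vm-disk@before-change
    rbd snap protect rbd/vm-disk@before-change
    rbd clone rbd/vm-disk@before-change rbd/vm-disk-restore
    # map/mount vm-disk-restore to copy individual files back out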
I have OSD nodes combined with MDS, MGR and MON daemons. A few VMs are also
running on them with libvirt; however, client and cluster traffic is on IPv4
(and I have no experience with IPv6). The cluster network is on a switch not
connected to the internet.
- I should enable IPv6 again
- enable forwarding so cluste
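If it helps, the knob involved is mainly the messenger bind option in ceph.conf; a minimal sketch for enabling IPv6 (whether dual-stack works depends on the release, so treat this as an assumption to verify):

    [global]
    ms_bind_ipv6 = true
    # ms_bind_ipv4 = false   # only if the cluster should become IPv6-only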
Is it possible to create an EC backed RBD via ceph-iscsi tools (gwcli,
rbd-target-api)? It appears that a pre-existing RBD created with the rbd
command can be imported, but there is no means to directly create an EC
backed RBD. The API seems to expect a single pool field in the body to work
with.
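The usual workaround is to create the image yourself with an erasure-coded data pool (metadata stays in a replicated pool) and then point ceph-iscsi at the pre-existing image; the pool and image names below are placeholders:

    # the EC pool must allow partial overwrites for RBD
    ceph osd pool set ec-data allow_ec_overwrites true
    # image metadata lives in the replicated pool "rbd", data objects in "ec-data"
    rbd create --size 1T --data-pool ec-data rbd/iscsi-disk-1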
Did you end up with all new IPs for your MONs? I've wondered how
a large KVM deployment should be handled when the instance metadata
has a hard-coded list of MON IPs for the cluster. How are they changed
en masse with running VMs? Or do these moves always result in at least
one MON with an origin
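For libvirt-based guests the hard-coded list ends up in the domain XML, which at least makes it easy to audit how many running VMs still reference the old addresses; the domain name is a placeholder:

    # show the monitor hosts a running guest was defined with
    virsh dumpxml myvm | grep -A4 "protocol='rbd'"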
Thanks, folks, for the replies. Now I feel confident enough to test this out in
my cluster.
Thanks
On 1/31/20 12:09 AM, Nigel Williams wrote:
> Did you end up with all new IPs for your MONs? I've wondered how
> a large KVM deployment should be handled when the instance metadata
> has a hard-coded list of MON IPs for the cluster. How are they changed
> en masse with running VMs? Or do these