> -Original Message-
> From: Adrian Saul [mailto:adrian.s...@tpgtelecom.com.au]
> Sent: 19 June 2017 06:54
> To: n...@fisk.me.uk; 'Alex Gorbachev'
> Cc: 'ceph-users'
> Subject: RE: [ceph-users] VMware + CEPH Integration
>
> Hi Alex,
>
> Have you experienced any problems with timeouts in the monitor action in
> pacemaker? Although largely stable, every now and again in our cluster the
> FS and Exportfs resources time out in pacemaker. There's no mention of any
> slow requests or peering etc. in the Ceph logs, so it's not clear what's
> causing them.
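If it is just a transient stall tripping the monitor op, lengthening the operation timeouts on those resources is one thing to try. A minimal crmsh sketch of what that could look like; the resource, device and mount names here are made up for illustration:

primitive p_fs_vmware ocf:heartbeat:Filesystem \
    params device="/dev/rbd/rbd/vmware01" directory="/srv/vmware" fstype="xfs" \
    op monitor interval="20s" timeout="60s" \
    op start timeout="120s" op stop timeout="120s"
primitive p_export_vmware ocf:heartbeat:exportfs \
    params directory="/srv/vmware" clientspec="10.0.0.0/24" \
        options="rw,no_root_squash" fsid="1" \
    op monitor interval="20s" timeout="60s"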
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Alex Gorbachev
> Sent: 16 June 2017 01:48
> To: Osama Hasebou
> Cc: ceph-users
> Subject: Re: [ceph-users] VMware + CEPH Integration
>
On Thu, Jun 15, 2017 at 5:29 AM, Osama Hasebou wrote:
> Hi Everyone,
>
> We would like to start testing using VMware with CEPH storage. Can people
> share their experience with production-ready setups they tried and whether
> they were successful?
>
> I have been reading lately that either NFS or iSCSI are possible, with some
> server acting as a gateway in between Ceph and VMware.
osd_heartbeat_grace is the number of seconds an OSD will wait after the last
successful heartbeat response from another OSD before telling the mons that
the other OSD is down. This is one you may want to lower from its default
value of 20 seconds.
mon_osd_min_down_reporters is the number of OSDs that have to report another
OSD as down before the mons will actually mark it down.
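For reference, this is roughly how those two settings look in ceph.conf; the values below are only an example to illustrate the knobs, not a recommendation:

[global]
# report a peer OSD down sooner than the 20 second default
osd heartbeat grace = 10
# how many OSDs must report a peer down before the mons mark it down
mon osd min down reporters = 2

Both can also be changed at runtime with injectargs if you want to experiment before committing anything to ceph.conf.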
> On 15 Jun 2017, at 14:24, David Byte wrote:
> Overall, performance is good. There are a few different approaches to
> mitigate the timeouts that happen for OSD failure detection.
> 1 – tune the thresholds for failure detection
This may have been blogged somewhere, but do you have any details?
To: ceph-users
Subject: Re: [ceph-users] VMware + CEPH Integration
Hi,
Please check the PetaSAN project
www.petasan.org
We provide clustered iSCSI using LIO/Ceph rbd and Consul for HA.
Works well with VMWare.
/Maged
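For anyone wondering what the LIO/rbd plumbing underneath looks like at its simplest, a rough sketch (pool, image and IQN names are made up, and PetaSAN automates this plus the HA/failover pieces):

rbd create vmware/esx-ds01 --size 4096G
rbd map vmware/esx-ds01                          # e.g. /dev/rbd0
targetcli /backstores/block create name=esx-ds01 dev=/dev/rbd0
targetcli /iscsi create iqn.2017-06.org.example:esx-ds01
targetcli /iscsi/iqn.2017-06.org.example:esx-ds01/tpg1/luns \
    create /backstores/block/esx-ds01

Portal/ACL setup and multipathing are left out here.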
From: Osama Hasebou
Sent: Thursday, June 15, 2017 12:29 PM
To: ceph-users
Subject: [ceph-users] VMware + CEPH Integration
Hi Everyone
Shameless plug... Ceph really shines for VM hosting when you can use rbds
instead of a filesystem with virtual disks on it. And better yet when your
hypervisor can use librbd to interface with your rbds. Anything less is
missing out on what ceph really has to offer. I understand that a lot of
environments can't use librbd directly, though.
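With qemu/KVM, for example, the hypervisor opens the image through librbd directly, with no filesystem or kernel-mapped block device in between (the pool and image names here are made up):

qemu-img create -f raw rbd:rbd/vm-disk01 20G
qemu-system-x86_64 -m 2048 -drive format=raw,file=rbd:rbd/vm-disk01,cache=writeback

Libvirt exposes the same thing through its rbd network disk type.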
> On 15 Jun 2017, at 10:29, Osama Hasebou wrote:
>
> We would like to start testing using VMware with CEPH storage. Can people
> share their experience with production-ready setups they tried and whether
> they were successful?
We are doing this with 4 OSD nodes (44 OSDs total) and 3 separate monitors.
Hi Everyone,
We would like to start testing using VMware with CEPH storage. Can people share
their experience with production-ready setups they tried and whether they were
successful?
I have been reading lately that either NFS or iSCSI are possible, with some
server acting as a gateway in between Ceph and VMware.
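The simplest form of the NFS-gateway idea is a kernel-mapped rbd re-exported over NFS from a gateway host. A rough sketch, with made-up names and none of the HA/failover you would want in production:

rbd create rbd/esx-nfs01 --size 2048G
rbd map rbd/esx-nfs01                     # e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /srv/esx-nfs01
mount /dev/rbd0 /srv/esx-nfs01
echo '/srv/esx-nfs01 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

VMware then mounts that export as an NFS datastore; the iSCSI route replaces the NFS pieces with an iSCSI target such as LIO.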