[ceph-users] CephFS emptying files or silently failing to mount?

2013-06-11 Thread Bo
e show_location
location.file_offset:   0
location.object_offset: 0
location.object_no:     0
location.object_size:   4194304
location.object_name:   100.0000
location.block_offset:  0
location.block_size:    4194304
location.osd:           4
-bo -- "But God demonstrates His own love toward us, i
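The show_location output above is the file-to-object mapping CephFS uses: a file offset falls into one 4 MB RADOS object, at some offset within it. A minimal sketch of that arithmetic (plain Python, not Ceph code; the "<inode hex>.<object number>" naming and the single-stripe layout are assumptions based on the values shown):

    # Sketch: map a file offset to a RADOS object for a simple CephFS layout
    # (stripe_count = 1, so stripe unit == object size, as the output suggests).
    OBJECT_SIZE = 4 * 1024 * 1024  # 4194304 bytes, matching location.object_size

    def locate(inode_hex, file_offset, object_size=OBJECT_SIZE):
        object_no = file_offset // object_size    # which object holds this byte
        block_offset = file_offset % object_size  # offset inside that object
        # Data objects are conventionally named "<inode in hex>.<object no in hex>"
        object_name = "%s.%08x" % (inode_hex, object_no)
        return object_name, object_no, block_offset

    print(locate("100", 0))                 # hypothetical inode "100": first object
    print(locate("100", 6 * 1024 * 1024))   # 6 MiB in: object 1, block offset 2 MiB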

Re: [ceph-users] CephFS emptying files or silently failing to mount?

2013-06-11 Thread Bo
Holy cow. Thank you for pointing out what should have been obvious. So glad these emails are kept on the web for future searchers like me ;) -bo On Tue, Jun 11, 2013 at 11:46 AM, Gregory Farnum wrote: > On Tue, Jun 11, 2013 at 9:39 AM, Bo wrote: > > howdy, y'all. > >

Re: [ceph-users] Live Migrations with cephFS

2013-06-14 Thread Bo
pull updated code from upstream into your deployment. -bo On 14.06.2013 12:55, Alvaro Izquierdo Jimeno wrote: > By default, OpenStack uses NFS but… other options are available… can we > use cephFS instead of NFS? Wouldn't you use qemu-rbd for your virtual guests in OpenStack? AFA

Re: [ceph-users] Live Migrations with cephFS

2013-06-16 Thread Bo
want. Thoughts? Corrections? Feel free to teach. -bo On Jun 16, 2013 9:44 AM, "Sebastien Han" wrote: > In OpenStack, a VM booted from a volume (where the disk is located on RBD) > supports the live-migration without any problems. > > > Sébastien Han > Cloud Engin
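The reason an RBD-backed volume live-migrates cleanly is that the image lives in the RADOS cluster rather than on either hypervisor, so the destination host can open exactly the same image the source was using. A minimal sketch with the rados/rbd Python bindings (the pool and image names are hypothetical; it assumes a readable /etc/ceph/ceph.conf and keyring on the client host):

    import rados
    import rbd

    # Connect with the local client configuration; any host with network access
    # to the cluster sees the same image, which is what makes live migration of
    # an RBD-backed guest possible.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')            # hypothetical pool name
        image = rbd.Image(ioctx, 'vm-disk-0001')     # hypothetical image name
        try:
            print("image size: %d bytes" % image.size())
            print("first 16 bytes: %r" % image.read(0, 16))
        finally:
            image.close()
            ioctx.close()
    finally:
        cluster.shutdown()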

[ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
hundreds/thousands who have had a monitor die. Thank you for your time and brain juice, -bo -- "But God demonstrates His own love toward us, in that while we were yet sinners, Christ died for us. Much more then, having now been justified by His blood, we shall be saved from the wrath of God th
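For context, a single monitor dying is not a single point of failure as long as a majority of the configured monitors stays up: the MONs form a Paxos quorum that needs a strict majority. A tiny illustration of the arithmetic (plain Python, not Ceph code):

    # Monitor quorum requires a strict majority of the configured monitors.
    def quorum_size(num_mons):
        return num_mons // 2 + 1

    def tolerated_failures(num_mons):
        return num_mons - quorum_size(num_mons)

    for n in (1, 3, 5):
        print("%d mon(s): quorum needs %d, tolerates %d failure(s)"
              % (n, quorum_size(n), tolerated_failures(n)))
    # 1 mon tolerates 0 failures, 3 mons tolerate 1, 5 mons tolerate 2.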

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
Thank you, Mike, Sage and Greg. Completely different than everything I had heard or read. Clears it all up. :) Gracias, -bo On Thu, Jun 20, 2013 at 11:15 AM, Gregory Farnum wrote: > On Thursday, June 20, 2013, Bo wrote: > > > > Howdy! > > > > Loving workin

[ceph-users] HA and data recovery of CEPH

2019-11-28 Thread Peng Bo
Hi all, We are working on using Ceph to build our HA system; the goal is that the system should keep providing service even when a Ceph node is down or an OSD is lost. Currently, as we have seen in practice, once a node/OSD goes down the Ceph cluster needs about 40 seconds to sync data, and during that time our system can't provide

Re: [ceph-users] HA and data recovery of CEPH

2019-11-28 Thread Peng Bo
? If your 'min_size' is not smaller than 'size', then you > will lose availability. > > On Thu, Nov 28, 2019 at 10:50 PM Peng Bo wrote: > > > > Hi all, > > > > We are working on using Ceph to build our HA system, the goal is that the > system sh
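The min_size/size point quoted above is the crux for availability: a placement group only accepts I/O while at least min_size of its size replicas are up, so min_size has to be strictly smaller than size if a single OSD or node failure should not block I/O. A toy illustration (plain Python, not Ceph code):

    # A PG with `size` replicas keeps serving I/O only while at least
    # `min_size` replicas are up.
    def pg_serves_io(up_replicas, min_size):
        return up_replicas >= min_size

    size, min_size = 3, 2          # the common replicated-pool setup
    for failed in range(size + 1):
        up = size - failed
        state = "serves I/O" if pg_serves_io(up, min_size) else "blocks I/O"
        print("%d replica(s) down -> PG %s" % (failed, state))
    # With min_size == size (e.g. 3/3), any single failure already blocks I/O,
    # which is the availability loss the reply above warns about.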

Re: [ceph-users] HA and data recovery of CEPH

2019-12-11 Thread Peng Bo
Thanks to all; now we can bring that duration down to around 25 seconds, which is the best result we can get. BR On Tue, Dec 3, 2019 at 10:30 PM Wido den Hollander wrote: > > > On 12/3/19 3:07 PM, Aleksey Gutikov wrote: > > > >> That is true. When an OSD goes down it will take a few seconds for its >
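The ~25-second floor is consistent with how OSD failure detection works: peer OSDs heartbeat each other and only report a peer down to the monitors once the heartbeat grace period has elapsed. A back-of-the-envelope sketch (the option names are real Ceph settings, but the default values and the simple sum are assumptions and an approximation, not the exact algorithm):

    # Rough model of how long it can take to notice a dead OSD.
    osd_heartbeat_interval = 6   # seconds between peer pings (assumed default)
    osd_heartbeat_grace = 20     # seconds without replies before reporting the
                                 # OSD down to the monitors (assumed default)

    worst_case_detection = osd_heartbeat_interval + osd_heartbeat_grace
    print("worst case: ~%d s before the OSD is marked down" % worst_case_detection)
    # Roughly matches the ~25 s observed above; shrinking the grace period
    # speeds up failover but risks flapping on a busy or briefly-unresponsive OSD.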