Re: [ceph-users] OSDs going down when we bring down some OSD nodes Or cut-off the cluster network link between OSD nodes

2016-08-07 Thread Christian Balzer
[Reduced to ceph-users, this isn't community related]

Hello,

On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote:
> Hi,
>
> We have configured single Ceph cluster in a lab with the below
> specification.
>
> 1. Divided the cluster into 3 logical sites (SiteA, SiteB & SiteC). Thi
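As a rough sketch of the knobs involved when OSDs get marked down after a whole site or its cluster-network link disappears: the option names below are real Ceph settings, but the values and the site layout are assumptions for illustration, not taken from the thread.

    # Current view of which OSDs the monitors consider up/down
    ceph -s
    ceph health detail
    ceph osd tree

    # Settings that govern how OSDs are marked down/out (values are examples only)
    ceph tell mon.* injectargs '--mon_osd_min_down_reporters 3'
    ceph tell mon.* injectargs '--mon_osd_down_out_subtree_limit host'
    ceph tell osd.* injectargs '--osd_heartbeat_grace 30'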

Re: [ceph-users] OSDs going down when we bring down some OSD nodes Or cut-off the cluster network link between OSD nodes

2016-08-07 Thread Shinobu Kinjo
On Sun, Aug 7, 2016 at 6:56 PM, Christian Balzer wrote:
>
> [Reduced to ceph-users, this isn't community related]
>
> Hello,
>
> On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote:
>
>> Hi,
>>
>> We have configured single Ceph cluster in a lab with the below
>> specification.
>>
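For the cluster-network cut specifically, the relevant ceph.conf split looks roughly like the sketch below (the subnets are placeholders). OSDs heartbeat their peers over both networks, so severing only the cluster network typically still results in OSDs reporting each other down even though the public side keeps working.

    [global]
    # placeholder subnets - substitute the lab's actual ranges
    public network  = 192.168.1.0/24
    cluster network = 10.10.1.0/24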

Re: [ceph-users] Giant to Jewel poor read performance with Rados bench

2016-08-07 Thread David
I created a new pool that only contains OSDs on a single node. The Rados bench gives me the speed I'd expect (1GB/s... all coming out of cache). I then created a pool that contains OSDs from 2 nodes. Now the strange part is, if I run the Rados bench from either of those nodes, I get the speed I'd expect
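For reference, a rados bench read test along these lines (the pool name is a placeholder): the write phase has to keep its objects or the seq phase has nothing to read, and dropping the page cache on the OSD nodes avoids measuring RAM instead of disks.

    # write 4 MB objects for 60 s and keep them for the read test
    rados bench -p testpool 60 write --no-cleanup

    # on each OSD node, drop the page cache so reads hit the disks
    sync; echo 3 > /proc/sys/vm/drop_caches

    # sequential read benchmark against the objects written above
    rados bench -p testpool 60 seq

    # remove the benchmark objects afterwards
    rados -p testpool cleanup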

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
> I'm confused. How can a 4M discard not free anything? It's either
> going to hit an entire object or two adjacent objects, truncating the
> tail of one and zeroing the head of another.

Using rbd diff:

> $ rbd diff test | grep -A 1 25165824
> 25165824 4194304 data
> 29360128 4194304 data
>
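One way to check this end to end (the device name and the default 4 MB object size are assumptions): map the image, issue a discard aligned to a full object, and see whether rbd diff still lists that extent.

    rbd map test                                  # e.g. appears as /dev/rbd0
    rbd diff test | grep -A 1 25165824            # extent listed before the discard
    blkdiscard -o 25165824 -l 4194304 /dev/rbd0   # discard exactly one 4 MB object
    rbd diff test | grep -A 1 25165824            # a freed object should no longer appear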

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
On Friday, August 5, 2016, matthew patton wrote:
>
> - ESXI's VMFS5 is aligned on 1MB, so 4MB discards never actually free
> anything
>
> the proper solution here is to:
> * quit worrying about it and buy sufficient disk in the first place, it's
> not exactly expensive
>
I would do that for one
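If the mismatch between 1 MB VMFS alignment and 4 MB RBD objects really is the problem, one possible workaround (not from the thread; pool/image names and sizes are placeholders) is to create the backing image with 1 MB objects so that 1 MB-aligned discards can free whole objects. Smaller objects mean many more of them per image, so this is a trade-off, not a recommendation.

    # order 20 => 2^20 byte (1 MB) objects; --size here is in MB
    rbd create --size 102400 --order 20 rbd/esx-lun1
    rbd info rbd/esx-lun1    # the "order 20 (1024 kB objects)" line confirms it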