[ceph-users] Real disk usage of clone images

2017-10-07 Thread Josy
Hi, Not sure if this is a good/valid question. I have deployed a lot of VMs in a ceph cluster that were cloned from an original rbd image. I want to see how much of the original image a new VM (cloned image) is using. Is there any command to get such details? == $ rbd info

Re: [ceph-users] Real disk usage of clone images

2017-10-07 Thread Jason Dillaman
The "rbd du" command will calculate how much space a clone is using, as well as individual snapshots. $ rbd du my-pool NAME PROVISIONED USED clone@1 10G 512M clone@2 10G 64M clone10G 512M parent@1 10G1G parent 10G 0 20G 2.0

[ceph-users] Configuring Ceph using multiple networks

2017-10-07 Thread Kashif Mumtaz
I have successfully installed Luminous on Ubuntu 16.04 with one network. Now I am trying to install the same using two networks (on a different machine): public network = 192.168.10.0/24, cluster network = 172.16.50.0/24. Each node has two interfaces: one in the public network, the other in the cluster net
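A minimal ceph.conf sketch for the two subnets mentioned in the message (everything except the two subnets is an assumption):

[global]
public network  = 192.168.10.0/24
cluster network = 172.16.50.0/24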

[ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Peter Linder
Hello Ceph-users! Ok, so I've got 3 separate datacenters (low latency network in between) and I want to make a hybrid NVMe/HDD pool for performance and cost reasons. There are 3 servers with NVMe based OSDs, and 2 servers with normal HDDs (Yes, one is missing, will be 3 of course. It needs some m
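A minimal sketch of the kind of rule being discussed, assuming a Luminous crush map with device classes and a datacenter bucket type; the rule name, root name, and numbers are assumptions, and this sketch does not by itself keep the NVMe copy and one of the HDD copies out of the same datacenter, which is the problem debated below:

rule hybrid {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class nvme
        step chooseleaf firstn 1 type datacenter
        step emit
        step take default class hdd
        step chooseleaf firstn -1 type datacenter
        step emit
}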

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Дробышевский , Владимир
Hello! 2017-10-07 19:12 GMT+05:00 Peter Linder: The idea is to select an nvme osd, and > then select the rest from hdd osds in different datacenters (see crush > map below for hierarchy). > > It's a bit beside the question, but why do you want to mix SSDs and HDDs in the same pool? Do y

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Peter Linder
On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote: > Hello! > > 2017-10-07 19:12 GMT+05:00 Peter Linder: > > The idea is to select an nvme osd, and > then select the rest from hdd osds in different datacenters (see crush > map below for hierarchy)

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Sinan Polat
You are talking about the min_size, which should be 2 according to your text. Please be aware, the min_size in your CRUSH rule is _not_ the replica size. The replica size is set on your pools. > On 7 Oct 2017 at 19:39, Peter Linder wrote the following: > >> On 10/7/2017 7:36 PM, Дроб
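The pool-level settings are the ones that control replication; a minimal sketch, assuming a pool named my-pool:

$ ceph osd pool get my-pool size
$ ceph osd pool set my-pool size 3
$ ceph osd pool set my-pool min_size 2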

Re: [ceph-users] Configuring Ceph using multiple networks

2017-10-07 Thread Sinan Polat
Why do you put your mons inside your cluster network; shouldn't they reside within the public network? The cluster network is only for replica data / traffic between your OSDs. > On 7 Oct 2017 at 14:32, Kashif Mumtaz wrote the following: > > > > I have successfully installed Lu
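A sketch of the point being made, reusing the subnets from the original message with made-up monitor addresses: the monitor addresses belong in the public subnet, e.g.

[global]
public network  = 192.168.10.0/24
cluster network = 172.16.50.0/24
mon host        = 192.168.10.11,192.168.10.12,192.168.10.13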

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Peter Linder
Yes, I realized that; I updated it to 3. On 10/7/2017 8:41 PM, Sinan Polat wrote: > You are talking about the min_size, which should be 2 according to > your text. > > Please be aware, the min_size in your CRUSH is _not_ the replica size. > The replica size is set with your pools. > > On 7 Oct 20

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Peter Linder
On 10/7/2017 8:08 PM, David Turner wrote: > > Just to make sure you understand that the reads will happen on the > primary osd for the PG and not the nearest osd, meaning that reads > will go between the datacenters. Also that each write will not ack > until all 3 writes happen adding the latency t

Re: [ceph-users] Bareos and libradosstriper works only for 4M sripe_unit size

2017-10-07 Thread Alexander Kushnirenko
Hi, Gregory! It turns out that this error is an internal Ceph feature. I wrote a standalone program to create a 132M object in striper mode. It works only for a 4M stripe. If you set stripe_unit = 2M, it still creates a 4M stripe_unit. Anything bigger than 4M causes a crash here
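A minimal standalone sketch of the kind of program described (not the poster's code); the pool and object names are made up, while the 2M stripe_unit and 132M object size are the values from the message:

// build roughly with: g++ striper_test.cc -lrados -lradosstriper
#include <rados/librados.hpp>
#include <radosstriper/libradosstriper.hpp>
#include <string>

int main() {
  librados::Rados cluster;
  cluster.init("admin");              // connect as client.admin with the default keyring
  cluster.conf_read_file(nullptr);    // read /etc/ceph/ceph.conf
  if (cluster.connect() < 0)
    return 1;

  librados::IoCtx ioctx;
  if (cluster.ioctx_create("test-pool", ioctx) < 0)   // hypothetical pool name
    return 1;

  libradosstriper::RadosStriper striper;
  libradosstriper::RadosStriper::striper_create(ioctx, &striper);

  // The layout under test: 2M stripe unit instead of the default 4M.
  // object_size is expected to be a multiple of stripe_unit.
  striper.set_object_layout_stripe_unit(2 * 1024 * 1024);
  striper.set_object_layout_stripe_count(4);
  striper.set_object_layout_object_size(8 * 1024 * 1024);

  // Write a single 132M logical object; the striper splits it into
  // rados objects according to the layout above.
  librados::bufferlist bl;
  bl.append(std::string(132 * 1024 * 1024, 'x'));
  int r = striper.write_full("striped-object", bl);

  ioctx.close();
  cluster.shutdown();
  return r < 0 ? 1 : 0;
}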

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread David Turner
Disclaimer: I have never attempted this configuration, especially with Luminous. I doubt many have, but it's a curious configuration that I'd love to help see if it is possible. There is one logical problem with your configuration (which you have most likely considered). If you want all of your PGs

Re: [ceph-users] Luminous cluster stuck when adding monitor

2017-10-07 Thread Nico Schottelius
Good evening Joao, we double-checked our MTUs; they are all 9200 on the servers and 9212 on the switches. And we have no problems transferring big files in general (as OpenNebula copies images around for importing, we do this quite a lot). So if you could have a look, it would be much appreciate
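One quick end-to-end check for jumbo frames that does not depend on large file copies (a sketch, not something requested in the thread): ping with fragmentation prohibited and a payload sized for the 9200-byte MTU, i.e. 9200 minus 28 bytes of IP and ICMP headers:

$ ping -M do -s 9172 -c 3 <peer-ip>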

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Peter Linder
On 10/7/2017 10:41 PM, David Turner wrote: > Disclaimer, I have never attempted this configuration especially with > Luminous. I doubt many have, but it's a curious configuration that I'd > love to help see if it is possible. Very generous of you :). (With that said, I suppose we are prepared to pa

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread David Turner
Just to make sure you understand: the reads will happen on the primary OSD for the PG and not the nearest OSD, meaning that reads will go between the datacenters. Also, each write will not ack until all 3 writes happen, adding latency to both the writes and the reads. On Sat, Oct 7, 2017, 1
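A minimal way to see which OSD serves reads for a given object (pool and object names are placeholders): `ceph osd map` prints the PG and its up/acting sets, and the primary OSD listed there is the one all client reads for that PG go to.

$ ceph osd map my-pool some-object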

Re: [ceph-users] PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

2017-10-07 Thread Дробышевский , Владимир
2017-10-08 2:02 GMT+05:00 Peter Linder: > > Then, I believe, the next best configuration would be to set size for this > pool to 4. It would choose an NVMe as the primary OSD, and then choose an > HDD from each DC for the secondary copies. This will guarantee that a copy > of the data goes into
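The size change being suggested is a pool-level setting; a minimal sketch, assuming the hybrid pool is simply named hybrid and keeping the min_size of 2 mentioned earlier in the thread:

$ ceph osd pool set hybrid size 4
$ ceph osd pool set hybrid min_size 2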