Re: [ceph-users] typo in news for PG auto-scaler

2019-04-05 Thread Junk
Also, in "$ ceph osd pool set foo pg_autoscaler_mode on", pg_autoscaleR_mode should be pg_autoscale_mode: "$ ceph osd pool set foo pg_autoscale_mode on". On Fri, 2019-04-05 at 08:05 +0200, Lars Täuber wrote: > Hi everybody! > There is a small mistake in the news about the PG autoscaler > https://ceph.com/rados/new-in-naut
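
For reference, a minimal sketch of the corrected command plus a way to verify it took effect, assuming a pool named "foo" on a Nautilus cluster:

    $ ceph osd pool set foo pg_autoscale_mode on   # note: pg_autoscale_mode, not pg_autoscaler_mode
    $ ceph osd pool autoscale-status               # the AUTOSCALE column for foo should now show "on"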

[ceph-users] bluefs-bdev-expand experience

2019-04-05 Thread Yury Shevchuk
Hello all! We have a toy 3-node Ceph cluster running Luminous 12.2.11 with one bluestore osd per node. We started with pretty small OSDs and would like to be able to expand OSDs whenever needed. We had two issues with the expansion: one turned out user-serviceable while the other probably needs
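
For context, the usual expansion sequence on a BlueStore OSD looks roughly like the sketch below; the OSD id (0) and the default data path are assumptions, and the underlying LV/partition must already have been enlarged:

    $ systemctl stop ceph-osd@0
    $ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0   # grow bluefs to the new device size
    $ systemctl start ceph-osd@0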

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-05 Thread Igor Fedotov
Hi Yury, wrt Round 1 - the ability to expand the block (main) device has been added to Nautilus, see: https://github.com/ceph/ceph/pull/25308 wrt Round 2: - not setting the 'size' label looks like a bug, although I recall I fixed it... Will double check. - wrong stats output is probably related to
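
As a hedged illustration of inspecting and repairing the 'size' label mentioned above (the device path and size value are hypothetical):

    $ ceph-bluestore-tool show-label --dev /dev/vg0/osd0-block        # inspect the bluestore label, including 'size'
    $ ceph-bluestore-tool set-label-key --dev /dev/vg0/osd0-block \
          -k size -v 214748364800                                     # manually set 'size' (bytes) if it was not updated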

Re: [ceph-users] bluefs-bdev-expand experience

2019-04-05 Thread Yury Shevchuk
On Fri, Apr 05, 2019 at 02:42:53PM +0300, Igor Fedotov wrote: > wrt Round 1 - an ability to expand block(main) device has been added to > Nautilus, > > see: https://github.com/ceph/ceph/pull/25308 Oh, that's good. But still separate wal&db may be good for studying load on each volume (blktrace) o
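
A sketch of how separate wal and db volumes can be specified at OSD creation time so the load on each can be traced independently; the device paths are placeholders:

    $ ceph-volume lvm create --bluestore --data /dev/sdb \
          --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2   # data, db and wal on separate devices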

[ceph-users] unable to turn on pg_autoscale

2019-04-05 Thread Daniele Riccucci
Hello, I'm running a (very) small cluster and I'd like to turn on pg_autoscale. In the documentation here > http://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/ it tells me that running ceph config set global osd_pool_default_autoscale_mode should enable this by default, how
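
A minimal sketch of enabling the autoscaler on Nautilus, assuming the option name osd_pool_default_pg_autoscale_mode; note that the global default only applies to newly created pools, so existing pools still need the per-pool setting:

    $ ceph mgr module enable pg_autoscaler
    $ ceph config set global osd_pool_default_pg_autoscale_mode on   # default for new pools
    $ ceph osd pool set <pool> pg_autoscale_mode on                  # existing pools, per pool
    $ ceph osd pool autoscale-status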

Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-05 Thread Casey Bodley
Hi Iain, Resharding is not supported in multisite. The issue is that the master zone needs to be authoritative for all metadata. If bucket reshard commands run on the secondary zone, they create new bucket instance metadata that the master zone never sees, so replication can't reconcile those chan
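
For comparison, this is the single-site (master zone) form of a manual reshard; the bucket name and shard count are hypothetical, and per the explanation above it should not be run on a secondary zone:

    $ radosgw-admin bucket reshard --bucket=mybucket --num-shards=64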

[ceph-users] Ceph Replication not working

2019-04-05 Thread Vikas Rana
Hi there, We are trying to set up rbd-mirror replication and after the setup, everything looks good but images are not replicating. Can someone please help? Thanks, -Vikas root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs Mode: pool Peers: UUID
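
A few hedged diagnostic commands that are often useful here; the primary cluster name ("ceph") and the image name are assumptions:

    $ rbd --cluster cephdr mirror pool status nfs --verbose   # per-image replay state on the DR side
    $ rbd --cluster ceph info nfs/<image>                     # pool-mode mirroring requires the journaling feature
    $ rbd --cluster ceph feature enable nfs/<image> journaling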

Re: [ceph-users] Ceph Replication not working

2019-04-05 Thread Jason Dillaman
What is the version of the rbd-mirror daemon and your OSDs? It looks like it found two replicated images and got stuck on the "wait_for_deletion" step. Since I suspect those images haven't been deleted, it should have immediately proceeded to the next step of the image replay state machine. Are there any ad
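
A sketch of gathering the requested information; the ceph.conf snippet is an assumption about where the rbd-mirror daemon picks up its debug options:

    $ ceph versions          # per-daemon-type versions on each cluster
    $ rbd-mirror --version

    # on the rbd-mirror host, raise logging in ceph.conf and restart the daemon:
    [client]
        debug rbd_mirror = 20
        debug rbd = 20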

[ceph-users] Cephalocon Barcelona, May 19-20

2019-04-05 Thread Sage Weil
Hi everyone, This is a reminder that Cephalocon Barcelona is coming up next month (May 19-20), and it's going to be great! We have two days of Ceph content over four tracks, including: - A Rook tutorial for deploying Ceph over SSD instances - Several other Rook- and Kubernetes-related talks, inc

[ceph-users] VM management setup

2019-04-05 Thread jesper
Hi. Knowing this is a bit off-topic, but seeking recommendations and advice anyway. We're seeking a "management" solution for VMs - currently in the 40-50 VM range - but would like better tooling for managing them and potentially migrating them across multiple hosts, setting up block devices, etc, etc.

Re: [ceph-users] VM management setup

2019-04-05 Thread Kenneth Van Alstyne
This is purely anecdotal (obviously), but I have found that OpenNebula is not only easy to set up, but also relatively lightweight, with very good Ceph support. 5.8.0 was recently released, but has a few bugs related to live migrations with Ceph as the backend datastore. You may want to look at 5.

Re: [ceph-users] VM management setup

2019-04-05 Thread Ronny Aasen
Proxmox VE is a simple solution. https://www.proxmox.com/en/proxmox-ve It is based on Debian and can administer an internal Ceph cluster or connect to an external one. Easy and almost self-explanatory web interface. Good luck in your search! Ronny On 05.04.2019 21:34, jes...@krogh.cc wrot

Re: [ceph-users] VM management setup

2019-04-05 Thread Brad Hubbard
If you want to do containers at the same time, or transition some/all to containers at some point in the future, maybe something based on kubevirt [1] would be more future-proof? [1] http://kubevirt.io/ CNV is an example: https://www.redhat.com/en/resources/container-native-virtualization On Sat, Apr