Also, the command
$ ceph osd pool set foo pg_autoscaler_mode on
("pg_autoscaleR_mode") should be
$ ceph osd pool set foo pg_autoscale_mode on
On Fri, 2019-04-05 at 08:05 +0200, Lars Täuber wrote:
> Hi everybody!
> There is a small mistake in the news about the PG autoscaler
> https://ceph.com/rados/new-in-naut
Hello all!
We have a toy 3-node Ceph cluster running Luminous 12.2.11 with one
bluestore OSD per node. We started with pretty small OSDs and would
like to be able to expand the OSDs whenever needed. We had two issues
with the expansion: one turned out to be user-serviceable, while the other
probably needs
Hi Yuri,
wrt Round 1 - the ability to expand the block (main) device has been added to
Nautilus,
see: https://github.com/ceph/ceph/pull/25308 (a command sketch follows below)
wrt Round 2:
- not setting the 'size' label looks like a bug, although I recall I fixed
it... Will double-check.
- wrong stats output is probably related to
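A minimal sketch of the Round 1 expansion flow, assuming OSD id 0, the default
data path, and that the underlying partition/LV has already been grown (adjust
for your setup):
$ systemctl stop ceph-osd@0
$ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
$ systemctl start ceph-osd@0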
On Fri, Apr 05, 2019 at 02:42:53PM +0300, Igor Fedotov wrote:
> wrt Round 1 - an ability to expand block(main) device has been added to
> Nautilus,
>
> see: https://github.com/ceph/ceph/pull/25308
Oh, that's good. But separate wal & db may still be good for studying the
load on each volume (blktrace) o
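A rough sketch of the kind of per-device tracing meant here, with /dev/sdb
standing in for a hypothetical DB volume and a 60-second capture window:
$ blktrace -d /dev/sdb -w 60 -o osd0-db
$ blkparse -i osd0-db | less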
Hello,
I'm running a (very) small cluster and I'd like to turn on pg_autoscale.
In the documentation here,
http://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/, it
tells me that running
ceph config set global osd_pool_default_autoscale_mode
should enable this by default, how
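In case it helps, a rough sketch of the steps as I understand them from that
page (the option name there is osd_pool_default_pg_autoscale_mode; the global
default only applies to pools created afterwards, and 'foo' is just an example
pool):
$ ceph mgr module enable pg_autoscaler
$ ceph config set global osd_pool_default_pg_autoscale_mode on
$ ceph osd pool set foo pg_autoscale_mode on
$ ceph osd pool autoscale-status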
Hi Iain,
Resharding is not supported in multisite. The issue is that the master zone
needs to be authoritative for all metadata. If bucket reshard commands run
on the secondary zone, they create new bucket instance metadata that the
master zone never sees, so replication can't reconcile those chan
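For comparison, in a single-site deployment a manual reshard would look roughly
like this (bucket name and shard count are made up for illustration; this is
exactly the kind of command that should not be run against a secondary zone):
$ radosgw-admin bucket stats --bucket=mybucket
$ radosgw-admin reshard add --bucket=mybucket --num-shards=64
$ radosgw-admin reshard process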
Hi there,
We are trying to set up rbd-mirror replication, and after the setup
everything looks good, but images are not replicating.
Can someone please help?
Thanks,
-Vikas
root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs
Mode: pool
Peers:
UUID
What is the version of the rbd-mirror daemon and your OSDs? It looks like it
found two replicated images and got stuck on the "wait_for_deletion"
step. Since I suspect those images haven't been deleted, it should
have immediately proceeded to the next step of the image replay state
machine. Are there any ad
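A few checks that might help narrow this down (a sketch reusing the 'cephdr'
cluster and 'nfs' pool names from the output above; <image> is a placeholder):
$ ceph --cluster cephdr versions
$ rbd --cluster cephdr mirror pool status nfs --verbose
$ rbd --cluster cephdr mirror image status nfs/<image>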
Hi everyone,
This is a reminder that Cephalocon Barcelona is coming up next month (May
19-20), and it's going to be great! We have two days of Ceph content over
four tracks, including:
- A Rook tutorial for deploying Ceph over SSD instances
- Several other Rook- and Kubernetes-related talks, inc
Hi. I know this is a bit off-topic, but I'm seeking recommendations
and advice anyway.
We're looking for a "management" solution for VMs - currently in the 40-50
VM range - and would like better tooling for managing them, potentially
migrating them across multiple hosts, setting up block devices, etc.
This is purely anecdotal (obviously), but I have found that OpenNebula is not
only easy to set up but also relatively lightweight, and it has very good Ceph support.
5.8.0 was recently released, but it has a few bugs related to live migration with
Ceph as the backend datastore. You may want to look at 5.
Proxmox VE is a simple solution.
https://www.proxmox.com/en/proxmox-ve
It is based on Debian and can administer an internal Ceph cluster or connect to
an external one. Easy and almost self-explanatory web interface.
Good luck in your search!
Ronny
On 05.04.2019 21:34, jes...@krogh.cc wrot
If you want to do containers at the same time, or transition some/all
to containers at some point in the future, maybe something based on
KubeVirt [1] would be more future-proof?
[1] http://kubevirt.io/
CNV is an example,
https://www.redhat.com/en/resources/container-native-virtualization
On Sat, Apr