On Jan 9, 2020, at 2:16 PM, Ilya Dryomov <idryo...@gmail.com> wrote:
On Thu, Jan 9, 2020 at 2:52 PM Kyriazis, George <george.kyria...@intel.com> wrote:
Hello ceph-users!
My setup is that I’d like to use RBD images as a replication target of a
FreeNAS zfs pool.
On Jan 9, 2020, at 9:27 AM, Stefan Kooman <ste...@bit.nl> wrote:
Quoting Kyriazis, George (george.kyria...@intel.com):
On Jan 9, 2020, at 8:00 AM, Stefan Kooman <ste...@bit.nl> wrote:
Quoting Kyriazis, George (george.kyria...@intel.com):
> On Jan 9, 2020, at 8:00 AM, Stefan Kooman wrote:
>
> Quoting Kyriazis, George (george.kyria...@intel.com):
>
>> The source pool has mainly big files, but there are quite a few
>> smaller (<4KB) files that I’m afraid will create waste if I create the
>> d…
Hello ceph-users!
My setup is that I’d like to use RBD images as a replication target of a
FreeNAS zfs pool. I have a 2nd FreeNAS (in a VM) to act as a backup target in
which I mount the RBD image. All this (except the source FreeNAS server) is in
Proxmox.
Since I am using RBD as a backup target…
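For reference, the Ceph side of that setup is fairly small. A minimal sketch, assuming the pool is the rbd1 pool mentioned later in this thread; the image name and size below are made up for illustration:

# ceph osd pool create rbd1 128
# rbd pool init rbd1
# rbd create rbd1/freenas-backup --size 10T

On a plain Linux client the image could then be mapped with "rbd map rbd1/freenas-backup"; under Proxmox you would normally attach the image to the backup VM as a virtual disk instead.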
Does anybody have any thoughts? How can I change pg_num (and also pgp_num)?
Thanks!
George
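For what it's worth, the change itself is just a pool set on both values. A sketch, assuming the pool is rbd1 and a target of 128 PGs (pick whatever number fits the cluster):

# ceph osd pool set rbd1 pg_num 128
# ceph osd pool set rbd1 pgp_num 128

On Nautilus the cluster splits PGs gradually after this, and my understanding is that pgp_num is largely kept in step automatically, so the second command may end up being a no-op there.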
> On Sep 12, 2019, at 7:49 AM, Kyriazis, George wrote:
>
> Hi Burkhard,
>
> I tried using the autoscaler; however, it did not give a suggestion to resize
> pg_num. Since my pg_num…
…autoscaler will work, either, when the time comes. The autoscaler pg_num
changes would follow the same execution path as manual changes, wouldn't they?
Thanks!
George
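In case it helps anyone following along, turning the autoscaler on looks roughly like this on Nautilus (a sketch; rbd1 is just the pool name used elsewhere in this thread):

# ceph mgr module enable pg_autoscaler
# ceph osd pool set rbd1 pg_autoscale_mode on
# ceph osd pool autoscale-status

The last command prints the autoscaler's per-pool view, so you can at least see what it thinks pg_num should be even if it has not suggested or applied a change yet.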
> On Sep 12, 2019, at 4:37 AM, Burkhard Linke wrote:
>
> Hi,
>
> On 9/12/19 5:16 AM, Kyriazis, George wrote:
# ceph osd pool get rbd1 pg_num
pg_num: 100
#
Suggestions, anybody?
Thanks!
George
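One more data point that might help narrow it down: on Nautilus, "ceph osd pool ls detail" also prints the target values the cluster is working toward. A sketch:

# ceph osd pool ls detail | grep rbd1

If a pg_num_target shows up there and differs from pg_num, the increase was accepted and is being applied gradually; if nothing like that appears, the set command never took effect in the first place.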
On Sep 11, 2019, at 5:29 PM, Kyriazis, George <george.kyria...@intel.com> wrote:
No, it’s pg_num first, then pgp_num.
Found the problem, and still slowly working on fixing it.
I upgraded from mimic…
Don't have to increase pgp_num first?
On Wed, Sep 11, 2019 at 6:23 AM Kyriazis, George <george.kyria...@intel.com> wrote:
I have the same problem (nautilus installed), but the proposed command gave me
an error:
# ceph osd require-osd-release nautilus
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature
I have the same problem (nautilus installed), but the proposed command gave me
an error:
# ceph osd require-osd-release nautilus
Error EPERM: not all up OSDs have CEPH_FEATURE_SERVER_NAUTILUS feature
#
I created my cluster with mimic and then upgraded to nautilus.
What would be my next step?
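That EPERM usually means at least one running OSD still reports a pre-Nautilus feature set. A quick way to find the stragglers (a sketch; the OSD id in the restart line is just a placeholder):

# ceph versions
# ceph osd versions
# systemctl restart ceph-osd@<id>

Assuming the Nautilus packages are already installed on every node, restarting whichever OSDs still show up as mimic and then re-running "ceph osd require-osd-release nautilus" should get past the error.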
Hello Ceph-users,
I am currently testing / experimenting with Ceph with some extra hardware that
is lying around. I am running Nautilus on Ubuntu 18.04 (all nodes).
The problem statement is that I'd like to back up a FreeNAS server using ZFS
snapshots and replication to a Ceph cluster.
I crea…
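For context on the ZFS half of this, the replication itself is just a snapshot plus send/receive to the backup box. A minimal sketch (the pool and dataset names tank/data and backup, the snapshot names, and the backup-freenas host are made up for illustration; the target would be the FreeNAS VM whose disk sits on the RBD image):

# zfs snapshot tank/data@backup-2019-09-10
# zfs send tank/data@backup-2019-09-10 | ssh backup-freenas zfs receive backup/data
# zfs send -i @backup-2019-09-10 tank/data@backup-2019-09-11 | ssh backup-freenas zfs receive -F backup/data

The first send seeds the target dataset; the -i form on later runs ships only the incremental delta between two snapshots, with -F rolling the target back to the common snapshot before applying it.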