Yes, we reweighted a lot before the upgrade. But from the output of ceph osd df,
the old reweight values were kept after the upgrade instead of being reset to 1.
Of course, those old reweight values are meaningless after the mass data migration.
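For reference, a rough sketch (untested) of resetting every reweight back to 1.0,
assuming the ceph CLI with an admin keyring on the host and that the field names
in "ceph osd df --format json" match your release:

#!/usr/bin/env python3
# Rough sketch: reset every OSD reweight back to 1.0 after the upgrade.
# Assumes the 'ceph' CLI is in PATH with an admin keyring; the JSON field
# names ("nodes", "id", "reweight") may differ between Ceph releases.
import json
import subprocess

def osd_df():
    out = subprocess.check_output(["ceph", "osd", "df", "--format", "json"])
    return json.loads(out)

def reset_reweights(dry_run=True):
    for node in osd_df().get("nodes", []):
        osd_id = node["id"]
        reweight = float(node.get("reweight", 1.0))
        if abs(reweight - 1.0) < 1e-6:
            continue  # already at 1.0, nothing to do
        print("osd.%d: reweight %.4f -> 1.0" % (osd_id, reweight))
        if not dry_run:
            subprocess.check_call(["ceph", "osd", "reweight", str(osd_id), "1.0"])

if __name__ == "__main__":
    reset_reweights(dry_run=True)  # flip to False once the printed plan looks right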
br,
Xu Yun
> On Jan 20, 2020, at 10:19 PM, Wido den Hollander wrote:
>
>
>
> On 1/20/
We have a small Ceph cluster built from components that were phased out
from compute applications. The current cluster consists of i7-860s, with
6 disks (5 TB, 7200 RPM) per node and 8 nodes, totaling 48 OSDs.
A compute cluster will be discontinued, which will make Ryzen 5 1600
hardware available
It seems that people are now split between the new and old list servers.
Regardless of which one they use, I am missing a number of messages that
appear on the archive pages but never seem to make it to my inbox. And no,
they are not in my junk folder. I wonder if some of my questions are not
getting a response
On 01/20/2020 10:29 AM, Gesiel Galvão Bernardes wrote:
> Hi,
>
> Only now have I been able to act on this problem. My environment is
> relatively simple: I have two ESXi 6.7 hosts, connected to two iSCSI
> gateways, using two RBD images.
>
> When this mail thread started, the workaround was to keep only one iSCSI gateway
Hi,
Pretty certain not. I hit that exact issue. The workaround suggested to
me was an init container running as root to change the ownership. That
works OK, but it is very hacky.
--
*Kevin Thorpe*
VP of Enterprise Platform
*W* *|* www.predictx.com
*P * *|* +44 (0)20 3005 6750
Is it possible to mount a CephFS with a specific uid or gid, to make it
available to a 'non-root' user?
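For reference, the workaround described above boils down to changing ownership
after the mount, as root; a rough sketch of that step (the mountpoint /mnt/cephfs
and uid/gid 1000 are placeholders, not taken from this thread):

#!/usr/bin/env python3
# Sketch of the "init container" workaround: recursively chown an already
# mounted CephFS path so a non-root user can work in it. Must run as root.
# The mountpoint and uid/gid below are placeholders.
import os

MOUNTPOINT = "/mnt/cephfs"
UID, GID = 1000, 1000

def chown_tree(root, uid, gid):
    os.chown(root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)

if __name__ == "__main__":
    chown_tree(MOUNTPOINT, UID, GID)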
Hi,
Only now have I been able to act on this problem. My environment is
relatively simple: I have two ESXi 6.7 hosts, connected to two iSCSI
gateways, using two RBD images.
When this mail thread started, the workaround was to keep only one iSCSI
gateway connected, so it works normally. After the answer
Quoting Wido den Hollander (w...@42on.com):
>
>
> On 1/19/20 12:07 PM, Stefan Kooman wrote:
> > Hi,
> >
> > Is there any logic / filtering which PGs to backfill at any given time
> > that takes into account the OSD the PG is living on?
> >
> > Our cluster is backfilling a complete pool now (512 PGs)
Hello,
I am currently evaluating Ceph for our needs and I have a question
about the 'object append' feature. I note that the rados core API
supports an 'append' operation, and the S3-compatible interface does
too.
My question is: does Ceph support concurrent append? I would like to
use Ceph as a t
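For reference, a minimal sketch of the append call in the Python rados binding
(the pool name "mypool" and the object name are placeholders; this only shows
the call itself, not the concurrency semantics being asked about):

#!/usr/bin/env python3
# Minimal append via the Python rados binding. Pool and object names are
# placeholders; whether concurrent appends interleave safely is the open
# question above, not something this sketch answers.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("mypool")   # assumed pool name
    try:
        ioctx.append("log-object", b"one record\n")
        ioctx.append("log-object", b"another record\n")
        print(ioctx.read("log-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()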
On 1/19/20 12:07 PM, Stefan Kooman wrote:
> Hi,
>
> Is there any logic / filtering which PGs to backfill at any given time
> that takes into account the OSD the PG is living on?
>
> Our cluster is backfilling a complete pool now (512 PGs) and (currently)
> of the 7 active+remapped+backfilling
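For reference, a rough way to see which OSDs the backfilling PGs map to, using
"ceph pg ls backfilling" (untested; the JSON layout and field names vary a bit
between Ceph releases):

#!/usr/bin/env python3
# Tally how many backfilling PGs involve each OSD, from the JSON output of
# "ceph pg ls backfilling". Field names ("pg_stats", "up", "acting") may
# differ between Ceph releases -- adjust to match yours.
import json
import subprocess
from collections import Counter

def backfilling_pgs():
    out = subprocess.check_output(
        ["ceph", "pg", "ls", "backfilling", "--format", "json"])
    data = json.loads(out)
    # newer releases wrap the list in "pg_stats"; older ones return a bare list
    return data.get("pg_stats", data) if isinstance(data, dict) else data

def per_osd_counts():
    counts = Counter()
    for pg in backfilling_pgs():
        for osd in set(pg.get("up", []) + pg.get("acting", [])):
            counts[osd] += 1
    return counts

if __name__ == "__main__":
    for osd, n in sorted(per_osd_counts().items()):
        print("osd.%s: %d backfilling PG(s)" % (osd, n))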
On 1/20/20 1:07 AM, 徐蕴 wrote:
> Hi,
>
> We upgraded our cluster from Jewel to Luminous, and it turned out that more
> than 80% of the objects were misplaced. Since our cluster has 130 TB of data,
> backfilling seems to take forever. We didn't modify the crushmap. Any
> thoughts about this issue?
Did you reweight your OSDs before the upgrade?