I’m trying to understand this situation:
ceph health detail
HEALTH_WARN Reduced data availability: 33 pgs inactive
[WRN] PG_AVAILABILITY: Reduced data availability: 33 pgs inactive
pg 1.0 is stuck inactive for 20h, current state unknown, last acting []
pg 2.0 is stuck inactive for 20h, current state unknown, last acting []
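A few commands that usually help narrow down why PGs sit in "unknown" with an empty acting set; this is only a sketch, reusing the pg and pool IDs from the output above:

ceph pg map 1.0            # where the PG maps; empty up/acting usually means CRUSH cannot place it
ceph osd pool ls detail    # which crush_rule each pool uses
ceph osd tree              # confirm there are up/in OSDs under the root and device class that rule expects
ceph pg 1.0 query          # may return nothing useful while the PG is still "unknown"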
So I did this:
ceph osd crush rule create-replicated hdd-rule default rack hdd
[ceph: root@cn01 ceph]# ceph osd crush rule ls
replicated_rule
hdd-rule
ssd-rule
[ceph: root@cn01 ceph]# ceph osd crush rule dump hdd-rule
{
"rule_id": 1,
"rule_name": "hdd-rule",
"ruleset": 1,
"type":
Peter,
We're seeing the same issues as you are. We have 2 new hosts (Intel(R)
Xeon(R) Gold 6248R CPU @ 3.00GHz with 48 cores, 384GB RAM, and 60x 10TB SED
drives), and we have tried both 15.2.13 and 16.2.4.
Cephadm does NOT properly deploy and activate OSDs on Ubuntu 20.04.2 with
Docker.
Seems to be a
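In case it helps others hitting the same thing, these are the places we have been looking first; a rough sketch, and osd.12 is just a placeholder id:

ceph orch device ls --wide        # does cephadm consider the drives available at all?
ceph orch ps --daemon-type osd    # which OSD daemons cephadm believes it deployed
ceph log last cephadm             # recent cephadm/ceph-volume errors from the mgr
cephadm logs --name osd.12        # journal of one OSD container on the host itself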
FYI, I'm seeing monitors that were assigned via '... apply label:mon', with
current and valid 'mon' tags, 'committing suicide' after surprise
reboots on the 'Pacific' 16.2.4 release. The tag indicating that a monitor
should be assigned to that host is present and has never changed.
Deleting the mon tag, waiting a
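For reference, that label dance looks roughly like this with cephadm (the hostname is a placeholder, and this is just a sketch of what has worked here, not an official procedure):

ceph orch host label rm mon-host-1 mon    # drop the label so "apply mon label:mon" removes the daemon
ceph orch ps --daemon-type mon            # wait until the mon on that host is gone
ceph orch host label add mon-host-1 mon   # re-add the label and let cephadm redeploy the mon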
Hi Reed,
To add to this command by Weiwen:
On 28.05.21 13:03, 胡 玮文 wrote:
Have you tried just starting multiple rsync processes simultaneously to transfer
different directories? Distributed systems like Ceph often benefit from more
parallelism.
When I migrated from XFS on iSCSI (legacy system, n
There is also a longstanding belief that using cpio saves you context switches
and avoids pushing the data through a pipe. YMMV.
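For concreteness, the two variants being compared are roughly these (paths are placeholders; GNU tar and GNU cpio assumed):

tar -C /src -cf - . | tar -C /dst -xpf -            # tar in | tar out: all file data flows through the pipe
cd /src && find . -depth -print0 | cpio -0pdm /dst  # cpio pass-through: only the file list goes through the pipe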
> On May 28, 2021, at 7:26 AM, Reed Dier wrote:
>
> I had it on my list of things to possibly try, a tar in | tar out copy to see
> if it yielded different results.
>
> On its face,
I had it on my list of things to possibly try, a tar in | tar out copy to see
if it yielded different results.
On its face, it seems like cp -a is getting ever so slightly better speed, but
it's not a clear night-and-day difference.
I will definitely look into this and report back any findings, posi
I guess I should probably have been clearer: this is one pool of many, so
the other OSDs aren't idle.
So I don't necessarily think that the PG bump would be the worst thing to try,
but it's definitely not as bad as I may have made it sound.
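For what it's worth, a cheap way to sanity-check a PG bump before committing to it (the pool name is a placeholder; only a sketch):

ceph osd pool autoscale-status             # autoscaler's current vs. suggested pg_num per pool
ceph osd pool get <poolname> pg_num
ceph osd pool set <poolname> pg_num 256    # on Nautilus and later, pgp_num and the splits ramp up gradually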
Thanks,
Reed
> On May 27, 2021, at 11:37 PM, Anthon
I’m continuing to read and it’s becoming more clear.
The CRUSH map seems pretty amazing!
-jeremy
> On May 28, 2021, at 1:10 AM, Jeremy Hansen wrote:
>
> Thank you both for your response. So this leads me to the next question:
>
> ceph osd crush rule create-replicated
>
>
> What is a
Hi Reed,
Have you tried just starting multiple rsync processes simultaneously to transfer
different directories? Distributed systems like Ceph often benefit from more
parallelism.
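For example, something along these lines (paths are placeholders; adjust -P to the parallelism your cluster can absorb):

cd /mnt/source && find . -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P 8 -I{} rsync -a {} /mnt/cephfs/dest/
# top-level regular files still need one ordinary rsync pass afterwards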
Weiwen Hu
> On May 28, 2021, at 03:54, Reed Dier wrote:
>
> Hoping someone may be able to help point out where my bottleneck(s) may be.
On 5/27/21 10:47 PM, Michael Thomas wrote:
Is there a way to log or track which cephfs files are being accessed?
This would help us in planning where to place certain datasets based on
popularity, e.g. on an EC HDD pool or a replicated SSD pool.
I know I can run inotify on the ceph clients, but I
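Since inotify came up: on a single client, something like this can at least sample access patterns (inotifywait is from inotify-tools; the mount point and log path are placeholders), though recursive watches get expensive on large trees and only see that one client's activity:

inotifywait -m -r -e open,access --timefmt '%F %T' --format '%T %e %w%f' \
  /mnt/cephfs/datasets >> /var/log/cephfs-access.log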
By experimenting, I managed to figure out how to get rid of that wrongly created
MDS service, so for those who are looking for this information too, this is the
command I used:
ceph orch rm mds.label:mds
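In case it saves someone a step, the surrounding checks were roughly these (only the service name above comes from my cluster, the rest is generic):

ceph orch ls mds                  # find the exact service name cephadm created
ceph orch rm mds.label:mds        # remove that service spec
ceph orch ps --daemon-type mds    # confirm the stray daemons are being removed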
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 9:16 PM, mabi wrote:
> Hello,
>
> I am try
On Thu, May 27, 2021 at 02:54:00PM -0500, Reed Dier wrote:
> Hoping someone may be able to help point out where my bottleneck(s) may be.
>
> I have an 80TB kRBD image on an EC8:2 pool, with an XFS filesystem on top of
> that.
> This was not an ideal scenario, rather it was a rescue mission to dum
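A few measurements that usually help localize this kind of bottleneck; just a sketch, nothing here is specific to Reed's setup:

rbd perf image iostat     # per-image IOPS/throughput/latency (Nautilus and later)
ceph osd perf             # per-OSD commit/apply latency
ceph osd pool stats       # client I/O per pool
iostat -x 5               # on the client: is the krbd device itself saturated?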
Thank you both for your response. So this leads me to the next question:
ceph osd crush rule create-replicated
What are <root> and <failure-domain> in this case?
It also looks like this is responsible for things like “rack awareness” type
attributes, which is something I’d like to utilize:
# types
type 0 osd
t
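For the archives, the general form per the docs, plus a concrete example tying it to the types list above (the rule name "rack-hdd" is made up for illustration):

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
ceph osd crush rule create-replicated rack-hdd default rack hdd   # replicas spread across racks, hdd-class OSDs only, under the "default" root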
Create a crush rule that only chooses non-ssd drives, then
ceph osd pool set <poolname> crush_rule YourNewRuleName
and it will move over to the non-ssd OSDs.
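And to confirm the switch took and watch the data migrate (the pool name is a placeholder; a sketch):

ceph osd pool get <poolname> crush_rule    # should now report YourNewRuleName
ceph -s                                    # misplaced objects / backfill progress
ceph pg ls-by-pool <poolname> | head       # per-PG states while things remap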
On Fri, May 28, 2021 at 02:18, Jeremy Hansen wrote:
>
>
> I’m very new to Ceph so if this question makes no sense, I apologize.
> Continuing to study