Hi all.
If a CephFS client is in a slow or unreliable network environment, the client
will be added to the OSD blacklist (recorded in the OSD map), and the default duration is 1 hour.
During this time, the client is forbidden to access Ceph. If I want to
solve this problem and ensure the client's normal I/O operat
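The question is cut off above, but assuming it asks how to avoid or shorten the
blacklisting, a rough sketch on a Nautilus-era cluster looks like the following;
the expiry option name is my assumption and may differ between releases:

# List current blacklist entries (client address, nonce and expiry time)
ceph osd blacklist ls
# Remove a specific entry once the client is healthy again (address is an example)
ceph osd blacklist rm 192.168.0.10:0/3710147553
# Shorten the default blacklist duration, in seconds (option name assumed)
ceph config set mon mon_osd_blacklist_default_expire 600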
Hi Tadas,
I also noticed the same issue a few days ago.
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GDUSELT7B3NY7NBU2XHZP6CRHE3OSD6A/
I have reported it to the developers via the ceph-devel IRC. I was told
that it will be fixed on the coming Friday at the earliest.
Hong
On Wed, 2
We hit the same problem when we export directories of CephFS through samba +
samba-vfs-cephfs.
By contrast, with the same Ceph cluster, we export directories of CephFS through
nfs-ganesha + nfs-ganesha-ceph, and the admin socket (created by libcephfs)
works fine.
ceph version: 14.2.5
samba version: 4.8.8
OS:
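A minimal sketch of the kind of configuration involved, assuming a share named
cephfs-share, a cephx user named samba, and an example socket path; the admin
socket option is what libcephfs uses to create the socket in both the Samba and
Ganesha cases:

# smb.conf (share name and cephx user are assumptions)
[cephfs-share]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba

# /etc/ceph/ceph.conf on the Samba host: ask libcephfs for an admin socket
[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok

Note that the smbd process needs write access to the socket directory.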
I upgraded one cluster to 14.2.10 and this perf counter is still growing.
Does anyone have an idea of how to debug this problem?
Jacek
On Sat, Jul 4, 2020 at 18:49, Simon Leinen wrote:
> Jacek Suchenia writes:
> > On two of our clusters (all v14.2.8) we observe a very strange behavior:
>
> > Ove
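Not a fix, but to narrow it down it can help to sample the counter via the
admin socket; osd.0 and the jq path below are placeholders for whichever daemon
and counter is growing:

# Dump all perf counters for one daemon (run on the host that carries it)
ceph daemon osd.0 perf dump > perf-$(date +%s).json
# Take another dump a few minutes later and diff the two, or extract a
# single counter with jq (the counter path here is hypothetical)
jq '.bluestore.some_counter' perf-*.json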
Thanks for your reply, it's helpful! We may consider adjusting
min_alloc_size to a lower value, or take other actions based on
your analysis of the space overhead with EC pools. Thanks.
Best
Jerry Pu
Igor Fedotov wrote on Tuesday, July 7, 2020 at 4:10 PM:
> I think you're facing the issue covered by the following ti
Please note that simple min_alloc_size downsizing might negatively
impact OSD performance. That's why this modification has been postponed
till Pacific - we've made a bunch of additional changes to eliminate the
drop.
Regards,
Igor
On 7/8/2020 12:32 PM, Jerry Pu wrote:
Thanks for your repl
OK. Thanks for your reminder. We will think about how to make the
adjustment to our cluster.
Best
Jerry Pu
Igor Fedotov wrote on Wednesday, July 8, 2020 at 5:40 PM:
> Please note that simple min_alloc_size downsizing might negatively impact
> OSD performance. That's why this modification has been postponed till
>
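For anyone following along, a rough sketch of how to check what an OSD is
configured with; note that bluestore_min_alloc_size_* only takes effect when an
OSD is created, so a lower value only helps for OSDs redeployed afterwards:

# Configured values on a running OSD (0 means "use the hdd/ssd specific default");
# the value actually baked into an existing OSD was fixed at mkfs time
ceph daemon osd.0 config get bluestore_min_alloc_size
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
# Lowering it only affects OSDs created after the change, e.g.:
ceph config set osd bluestore_min_alloc_size_hdd 4096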
From my point of view, preventing clients from being added to the blacklist
may be better if you are in a poor network environment. AFAIK the server pings
clients frequently, and it will add a client to the blacklist if it doesn't
receive a reply.
<380562...@qq.com> wrote on Wednesday, July 8, 2020 at 3:01 PM:
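A hedged sketch of the knobs this refers to, assuming a Nautilus-era cluster;
disabling them trades consistency protection for availability, so use with care:

# Don't blacklist clients whose sessions time out or get evicted
# (option names as in Nautilus; verify with `ceph config help <option>`)
ceph config set mds mds_session_blacklist_on_timeout false
ceph config set mds mds_session_blacklist_on_evict false
# Allow a stale client session to try to reconnect instead of being cut off
ceph config set client client_reconnect_stale true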
Please strace both virsh and libvirtd (you can attach to it by pid),
and make sure that the strace command uses the "-f" switch (i.e.
traces all threads).
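Roughly like this, assuming a single system libvirtd instance; the output paths
and -tt timestamps are just suggestions:

# Attach to all libvirtd threads and log syscalls to a file
strace -f -tt -o /tmp/libvirtd.strace -p "$(pidof libvirtd)"
# In a second terminal, trace the hanging client call itself
strace -f -tt -o /tmp/virsh.strace virsh pool-list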
On Wed, Jul 8, 2020 at 6:20 PM Andrei Mikhailovsky wrote:
>
> Jason,
>
> After adding the 1:storage to the log line of the config and restarti
Jason, this is what I currently have:
log_filters="1:libvirt 1:util 1:qemu"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
I will add the 1:storage and send more logs.
Thanks for trying to help.
Andrei
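For completeness, the resulting /etc/libvirt/libvirtd.conf lines would presumably
look like this (libvirtd needs a restart for them to take effect):

log_filters="1:libvirt 1:util 1:qemu 1:storage"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"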
- Original Message -
> From: "Jason Dillaman"
> To: "Andrei Mikhailovsky"
> C
Hi,
have you tried restarting all osds?
Am 08.07.2020 um 15:43 schrieb Ml Ml:
> Hello,
>
> Ceph has been stuck for 4 days with 0.064% misplaced objects and I don't know
> why. Can anyone help me get it fixed?
> I did restart some OSDs and reweighted them again to get some data
> moving, but that did not help.
Jason,
After adding 1:storage to the log filters line of the config and restarting the
service, I do not see anything in the logs. I've run the "virsh pool-list"
command several times and there is absolutely nothing in the logs. The command
keeps hanging.
Running strace on virsh pool-list show
Hello,
Ceph has been stuck for 4 days with 0.064% misplaced objects and I don't know
why. Can anyone help me get it fixed?
I did restart some OSDs and reweighted them again to get some data
moving, but that did not help.
root@node01:~ # ceph -s
  cluster:
    id:     251c937e-0b55-48c1-8f34-96e84e4023d4
    health: HEALTH_WARN
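The status output is cut short above; to narrow down which PGs are stuck and
why, something along these lines usually helps (standard commands, but the
interpretation depends on the cluster):

ceph health detail              # which PGs/objects are behind
ceph pg dump_stuck unclean      # PGs that are not active+clean
ceph osd pool ls detail         # pool flags and pg_num changes in flight
ceph balancer status            # is the balancer still planning moves?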
Alexander, here you go:
1. strace of the libvirtd -l process:
root@ais-cloudhost1:/etc/libvirt# strace -f -p 53745
strace: Process 53745 attached with 17 threads
[pid 53786] futex(0x7fd90c0f4618,
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 2, NULL, 0x
[pid 53785] futex(0x55699ad13
Hello,
My understanding is that the time to format an RBD volume is not dependent
on its size, as RBD volumes are thin provisioned. Is this correct?
For example, formatting a 1G volume should take almost the same time as
formatting a 1TB volume - although accounting for differences in latencies
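A small experiment that illustrates the point, assuming a pool named rbd and
krbd on the client; most of the mkfs time on a large thin volume typically goes
into discards and inode table initialization, which the flags below skip:

rbd create rbd/testvol --size 1T
sudo rbd map rbd/testvol                      # e.g. /dev/rbd0
# Skip discarding the (unallocated) device and lazily init inode tables, so
# formatting time is largely independent of the provisioned size
sudo mkfs.ext4 -E nodiscard,lazy_itable_init=1 /dev/rbd0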
Hi,
For this post:
https://ceph.io/community/bluestore-default-vs-tuned-performance-comparison/
I don't see a way to contact the authors so I thought I would try here.
Does anyone know how the rocksdb tuning parameters of:
"
bluestore_rocksdb_options =
compression=kNoCompression,max_write_buff
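The quoted value is truncated here; to compare against what your own OSDs
currently run with, the setting can be read from a live daemon (osd.0 is just
an example), keeping in mind that changing it needs an OSD restart and careful
testing:

ceph daemon osd.0 config get bluestore_rocksdb_options
# or via the central config interface
ceph config show-with-defaults osd.0 | grep bluestore_rocksdb_options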
On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill wrote:
>
> Hello,
>
> My understanding is that the time to format an RBD volume is not dependent
> on its size as the RBD volumes are thin provisioned. Is this correct?
>
> For example, formatting a 1G volume should take almost the same time as
> forma
On Wed, Jul 8, 2020 at 4:56 PM Jason Dillaman wrote:
> On Wed, Jul 8, 2020 at 3:28 PM Void Star Nill
> wrote:
> >
> > Hello,
> >
> > My understanding is that the time to format an RBD volume is not dependent
> > on its size as the RBD volumes are thin provisioned. Is this correct?
> >
> > For
Do you have pg_autoscaler enabled or the balancer module?
Quoting Ml Ml:
Hello,
Ceph has been stuck for 4 days with 0.064% misplaced objects and I don't know
why. Can anyone help me get it fixed?
I did restart some OSDs and reweighted them again to get some data
moving, but that did not help.
root@node01:
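To answer that, the usual places to look (module and command names as in Nautilus):

ceph mgr module ls | head -20      # enabled mgr modules (balancer, pg_autoscaler, ...)
ceph balancer status
ceph osd pool autoscale-status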