have time.
Regards,
--
Mitsumasa KONDO
On Tue, Apr 16, 2024 at 15:35, Janne Johansson wrote:
> On Mon, Apr 15, 2024 at 13:09, Mitsumasa KONDO <
> kondo.mitsum...@gmail.com> wrote:
> > Hi Menguy-san,
> >
> > Thank you for your reply. Users who use large IO with tiny volumes are a
Hi Anthony-san,
Thank you for your advice. I checked the settings of my Ceph cluster.
The autoscaler mode is on, so I had assumed the PG counts were already
optimal. But the autoscaler doesn't control how many PGs end up on each
OSD; it only adjusts PG_NUM for the storage pools. Is that right?
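For reference, a quick way to compare the two views (assuming a reasonably
recent release) would be something like:

  ceph osd pool autoscale-status   # per-pool PG_NUM targets chosen by the autoscaler
  ceph osd df                      # the PGS column shows how many PGs each OSD actually holds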
Regards,
--
Mitsumasa
8GB volume, I had a feeling it wouldn't be distributed
well, but it actually is distributed well.
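A rough way to check where a small image's objects actually land (the pool
and image names below are only examples):

  rbd info rbd/vol8g                   # object size is 4 MiB by default, so an 8 GB image has ~2048 objects
  ceph osd map rbd <one object name>   # shows which PG and OSDs a given object maps to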
Regards,
--
Mitsumasa KONDO
On Mon, Apr 15, 2024 at 15:29, Etienne Menguy wrote:
> Hi,
>
> Volume size doesn't affect performance; cloud providers apply a limit to
> ensure they can deliver expected performance.
https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html
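As a side note, Ceph itself doesn't impose such a per-volume cap; if
cloud-like behaviour is wanted, librbd QoS settings can provide it. A rough
sketch (pool and image names are only examples):

  rbd config image set rbd/vol8g rbd_qos_iops_limit 3000
  rbd config image set rbd/vol8g rbd_qos_bps_limit 125000000   # ~125 MB/s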
Regards,
--
Mitsumasa KONDO
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
) quincy (stable)
1: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f3be5cef520]
2: pthread_kill()
I think the latest Ceph doesn't work on an RDMA network with ConnectX-6 (mlx5
driver). My RDMA network works fine with other RDMA tools.
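For reference, the messenger settings involved look roughly like this; this
is only a sketch, and mlx5_0 is an assumption (use the device name reported
by ibv_devinfo):

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0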
Regards,
--
Mitsumasa KONDO
On Tue, Dec 13, 2022 at 19:18, Mitsumasa KONDO wrote:
::process()+0x333) [0x7f59fa6422e3]
 7: (EventCenter::process_events(unsigned int, std::chrono::duration >*)+0xa74) [0x7f59fa69e6d4]
 8: /usr/lib64/ceph/libceph-common.so.2(+0x5fefa6) [0x7f59fa6a5fa6]
 9: /lib64/libstdc++.so.6(+0xc2ba3) [0x7f59f908fba3]
 10: /lib64/libpthread.so.0(+0x81ca) [
urity/limits.conf. What should I do?
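For context, RDMA has to register and pin memory, so the usual prerequisite
is an unlimited memlock limit for the Ceph daemons. A sketch of what that
typically looks like, assuming the daemons run as the ceph user under
systemd:

  # /etc/security/limits.conf
  ceph soft memlock unlimited
  ceph hard memlock unlimited

  # systemd drop-in, e.g. /etc/systemd/system/ceph-osd@.service.d/rdma.conf
  [Service]
  LimitMEMLOCK=infinity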
Regards,
--
Mitsumasa KONDO