On Fri, Apr 19, 2019 at 6:53 PM Varun Singh wrote:
>
> On Fri, Apr 19, 2019 at 10:44 AM Varun Singh wrote:
> >
> > On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl
> > wrote:
> > >
> > > Hi !
> > >
> > > I am not 100% sure, but I think --net=host does not propagate /dev/
> > > inside the container.
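> > > If so, a workaround sketch (untested on my side; the image and
> > > device names below are placeholders):
> > >
> > >   # --net=host only shares the host network namespace; devices
> > >   # still have to be passed in explicitly:
> > >   docker run --net=host --device=/dev/sdb:/dev/sdb some-ceph-image
> > >
> > > or run the container with --privileged, which exposes all host
> > > devices.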
Hi All,
I still see this issue with the latest Ceph Luminous, 12.2.11 and 12.2.12.
I had set bluestore_min_alloc_size = 4096 before the test.
When I write 10 small objects (less than 64KB each) through RGW, the RAW
USED shown in "ceph df" looks incorrect.
For example, I tested three times and cleaned up
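Roughly what the test looked like (a sketch only; the s3cmd setup,
bucket name, and object size are placeholders, and note that
bluestore_min_alloc_size only takes effect on newly created OSDs):

  # [osd] section of ceph.conf, set before the OSDs were deployed:
  #   bluestore_min_alloc_size = 4096

  # write ten <64KB objects through RGW, then check raw usage
  dd if=/dev/urandom of=obj-32k.bin bs=32768 count=1
  for i in $(seq 1 10); do s3cmd put obj-32k.bin s3://testbucket/obj-$i; done
  ceph df    # RAW USED here is what looks wrong vs. the ~320KB written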
Hello,
firstly, this has been discussed here in many incarnations, which is
likely the reason for the silence; a little research goes a long way.
For starters, do yourself a favor and monitor your Ceph nodes with atop,
or collect/graph everything at a high resolution (5-second intervals at
least), to get an idea of what is going on.
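As one concrete option (not the only one), atop can record raw samples
at that interval for later replay:

  atop -w /var/log/atop/ceph-node.raw 5   # sample every 5 seconds
  atop -r /var/log/atop/ceph-node.raw     # replay the recording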
Glad it worked.
On Mon, Apr 22, 2019 at 11:01 AM Can Zhang wrote:
>
> Thanks for your detailed response.
>
> I freshly installed CentOS 7.6 and ran install-deps.sh and
> do_cmake.sh, and it works this time. Maybe the problem was
> caused by a dirty environment.
>
>
> Best,
> Can Zhang
>
Thanks for your detailed response.
I freshly installed CentOS 7.6 and ran install-deps.sh and
do_cmake.sh, and it works this time. Maybe the problem was
caused by a dirty environment.
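For anyone hitting the same thing, the steps on the fresh box were the
standard upstream source build (the exact branch and final make
invocation are approximate, from memory):

  git clone https://github.com/ceph/ceph.git
  cd ceph
  ./install-deps.sh         # install the distro's build dependencies
  ./do_cmake.sh             # configure a (debug) build under ./build
  cd build && make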
Best,
Can Zhang
On Fri, Apr 19, 2019 at 6:28 PM Brad Hubbard wrote:
>
> OK. So this works for me wi
On Sun, Apr 21, 2019 at 03:11:44PM +0200, Marc Roos wrote:
> Double thanks for the on-topic reply. The other two responses were
> making me doubt if my Chinese (which I didn't study) is better than my
> English.
They were almost on topic, but not that useful. Please don't imply
language failings
Double thanks for the on-topic reply. The other two responses were
making me doubt if my Chinese (which I didn't study) is better than my
English.
>> I am a bit curious how production Ceph clusters are being used. I am
>> reading here that block storage is used a lot with OpenStack
Just updated Luminous and set the osd_max_scrubs value back. Why do I
get OSDs reporting this differently?
I get these:
osd.18: osd_max_scrubs = '1' (not observed, change may require restart)
osd_objectstore = 'bluestore' (not observed, change may require restart)
rocksdb_separate_wal_dir = 'false' (not observed, change may require restart)
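For context, the kind of commands involved (the exact invocation in my
case may have differed):

  # inject at runtime; options marked "not observed" only take
  # effect after the OSD is restarted
  ceph tell osd.* injectargs '--osd_max_scrubs 1'

  # confirm what a given daemon is actually running with (run on
  # the host of osd.18, via its admin socket)
  ceph daemon osd.18 config get osd_max_scrubs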