Hi,
I want to use Alluxio to speed up reads and writes on CephFS, so I want to
ask if anyone has already done this. Is there any wiki or experience you can
share on how to set up the environment?
I know there is a wiki about Alluxio using CephFS as the backing storage:
https://docs.alluxio.io/os/user/stable/en/ufs/CephF
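From that page, my rough understanding is that the UFS is configured through
conf/alluxio-site.properties, along these lines (the property names are from
my reading of the docs and may be off; the mon hosts, auth id, and keyring
path are placeholders, so please correct me):

  # conf/alluxio-site.properties -- sketch, values are placeholders
  alluxio.master.mount.table.root.ufs=cephfs://mon1\;mon2\;mon3/
  alluxio.underfs.cephfs.conf.file=/etc/ceph/ceph.conf
  alluxio.underfs.cephfs.auth.id=alluxio
  alluxio.underfs.cephfs.auth.keyring=/etc/ceph/ceph.client.alluxio.keyring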
Dear All,
We've just encountered a problem with enabling Grafana embedded
dashboards in a 100% container-free install of 16.2.6.
It seems that the dashboard now needs the data source in Grafana to be
named "Dashboard1".
You would typically see this setting on a Grafana server here:
http://c
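For a container-free install, pinning that name through Grafana's
provisioning mechanism seems like the cleanest fix, assuming it really is
the data source name the dashboard keys on; roughly like this (the URL and
file path are placeholders):

  # /etc/grafana/provisioning/datasources/ceph-dashboard.yml -- sketch
  apiVersion: 1
  datasources:
    - name: 'Dashboard1'   # the name the mgr dashboard appears to expect
      type: 'prometheus'
      access: 'proxy'
      url: 'http://prometheus.example.com:9090'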
Hi,
I plan to use Samsung PM883 1.92 TB drives as OSDs, each configured as a
single-drive RAID0 behind a PERC controller, with journal and data on the
same drive.
Does anyone have a similar setup? Any hints or tips would be appreciated.
BR
Max
I will tell you about our experience:
Dell PERC controllers with HDDs and a separate Intel NVMe drive for
journals, etc.
At first, with the disks behind the controller with caching enabled, each
set up as a RAID0, and the OSDs encrypted, everything was good.
When we upgraded to LVM, still encrypted, and sti
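If you want to compare cache settings: perccli, which uses storcli-style
syntax as far as I remember, can show and change the policy on those
single-drive RAID0 VDs. The controller/VD numbers below are placeholders,
so check the flags against your controller's docs:

  perccli64 /c0/vall show all       # list VDs incl. current cache policy
  perccli64 /c0/v0 set wrcache=wt   # write-through instead of write-back
  perccli64 /c0/v0 set pdcache=on   # keep the drive's own cache enabled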
Hi all,
I'm looking at doing a Luminous to Nautilus upgrade. I'd like to
assimilate the config into the mon DB. However, we do have hosts with
differing [osd] config sections in their current ceph.conf files. I was
looking at using the crush-type mask host:xxx to set these differently
where required.
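Concretely, I had something like the following in mind; osd_memory_target
and the hostnames are just examples:

  # fold the existing ceph.conf into the mon config db
  ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal
  # then apply per-host overrides via the host: mask
  ceph config set osd/host:nodeA osd_memory_target 8589934592
  ceph config set osd/host:nodeB osd_memory_target 4294967296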
Wow, I'm so surprised! Words cannot express my thanks to you, yantao!
I have sent you a mail with my detailed questions; would you please help check it?
Thanks a ton
Thanks,
Xiong
> On 26 Nov 2021, at 10:47, xueyantao2114 wrote:
>
> First, thanks for your question. Alluxio underfs ceph and ce
Hi Mark,
I have noticed exactly the same thing on Nautilus, where host didn't
work but chassis did. I posted to this mailing list a few weeks
ago.
It's very strange that the host filter is not working. I also could
not find any errors logged for this, so it looks like it's just
ignoring the set
Thank you for the confirmation. In my case, using either the device class
(class:hdd vs class:ssd) or a top-level root (default vs default-ssd) might
be good enough. But we *do* have hosts with differing amounts of memory
too, so it would be great if this could be fixed and patched!
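For reference, the class-based masks do seem to take effect, and
ceph config show is a quick way to verify what an OSD actually picked up;
osd_memory_target is just an example option here:

  ceph config set osd/class:hdd osd_memory_target 4294967296
  ceph config set osd/class:ssd osd_memory_target 8589934592
  ceph config show osd.0 | grep osd_memory_target   # effective value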
Sorry - while I