Is it FileStore or BlueStore? With this SSD-HDD solution, is the journal
or WAL/DB on the SSD or the HDD? My understanding is that there is no
benefit to putting the journal or WAL/DB on the SSD with such a solution.
Keeping it on the HDD would also eliminate the single point of failure of
having all WAL/DB partitions on one SSD. Just want to confirm.
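(For reference, on BlueStore the DB/WAL placement is decided when the OSD is
created; a minimal ceph-volume sketch, with placeholder device names:)

    # HDD for data, SSD partition for RocksDB/WAL (device paths are examples only)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

    # pure HDD OSD: omit --block.db and the DB/WAL stays co-located on the data device
    ceph-volume lvm create --bluestore --data /dev/sdb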
Hi all,
I moved the CRUSH location of 8 OSDs and rebalancing went on happily (misplaced
objects only). Today, osd.1 crashed, restarted and rejoined the cluster.
However, it seems not to have rejoined some PGs it was a member of. I now have
undersized PGs for no real reason, as far as I can tell:
PG_DEGRAD
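(A few commands that usually help narrow this down; the PG id below is only an
example:)

    # list PGs that are currently stuck undersized
    ceph pg dump_stuck undersized

    # ask a specific PG why it no longer maps back to the OSD (example PG id)
    ceph pg 2.1f query | less

    # confirm the OSD is up/in and sits where you expect in the CRUSH tree
    ceph osd tree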
Hi,
At last, the problem is fixed for now by adding the cluster network IP to the
second interface.
But it looks weird that the client wants to communicate with the cluster IP.
Does anyone have an idea why we need to provide the cluster IP to a client
mounting through the kernel?
Initially, when the cluster was set up
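(For context, clients are normally expected to reach only the public network;
the cluster network is meant for OSD replication and heartbeat traffic. A
minimal sketch of how the two networks are usually declared, with example
subnets:)

    # example subnets only; kernel or FUSE clients should only need the public_network
    ceph config set global public_network 192.168.1.0/24
    ceph config set global cluster_network 10.0.0.0/24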
NFS also works. I recommend NFS 4.1+ for performance reasons.
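(A minimal client-side sketch, assuming a hypothetical NFS gateway hostname and
export path:)

    # mount the CephFS export over NFS, forcing protocol version 4.1
    mount -t nfs -o vers=4.1 nfs-gateway.example.com:/cephfs /mnt/cephfs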
On Sat, Nov 7, 2020 at 4:51 AM Marco Venuti wrote:
>
> Hi,
> I have the same use-case.
> Is there some alternative to Samba in order to export CephFS to the end
> user? I am somewhat concerned with its potential security
> vulnerabilities, which appear to be quite frequent.
Hi,
I have the same use-case.
Is there some alternative to Samba in order to export CephFS to the end
user? I am somewhat concerned with its potential security
vulnerabilities, which appear to be quite frequent.
Specifically, I need server-side enforced permissions and possibly
Kerberos authentication.
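(If it helps, a minimal sketch of server-side path restriction via CephFS
client capabilities; the client name and path are placeholders:)

    # restrict a hypothetical client to read/write under /shares/team1 only,
    # enforced on the server side by the MDS/OSD caps
    ceph fs authorize cephfs client.team1 /shares/team1 rw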