Hi,
I have a rook-provisioned cluster to be used for RBDs only. I have 2 pools
named replicated-metadata-pool and ec-data-pool. EC parameters are 6+3.
I've been writing some data to this cluster for some time and noticed that
the reported usage is not what I was expecting.
# ceph df
RAW STORAGE:
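For reference, a 6+3 erasure-coded pool should consume (k+m)/k = 9/6 = 1.5x the stored data in raw space, ignoring allocation and metadata overhead. A minimal sanity check (the 1.5x factor is the theoretical minimum; pool names are the ones from the message):

    # STORED vs USED for ec-data-pool should differ by roughly 1.5x
    ceph df detail
    # e.g. 100 TiB STORED in ec-data-pool -> roughly 150 TiB USED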
Hi,
I'm just deploying a CephFS service.
I would like to know the expected differences between a FUSE and a kernel mount.
Why the 2 options? When should I use one and when should I use the other?
Regards,
Rodrigo Severo
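For a quick comparison of what the two options look like in practice (monitor address, user name, key file and mount points below are placeholders):

    # kernel client, uses the CephFS driver built into the kernel
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=myuser,secretfile=/etc/ceph/myuser.secret

    # FUSE client, runs in userspace and follows Ceph releases more closely
    ceph-fuse -n client.myuser /mnt/cephfs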
Hi,
Just starting to use CephFS.
I would like to know the impact of having one single CephFS mount vs.
having several.
If I have several subdirectories in my CephFS that should be
accessible to different users, with users needing access to different
sets of mounts, would it be important for me t
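In case it helps, per-directory client keys can be created so each user can only mount and access their own subtree; a rough sketch, with filesystem, client and path names as placeholders:

    # key restricted to one subdirectory of the filesystem "cephfs"
    ceph fs authorize cephfs client.teamA /teams/teamA rw

    # mount only that subtree
    mount -t ceph mon1:6789:/teams/teamA /mnt/teamA -o name=teamA,secretfile=/etc/ceph/teamA.secret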
Hi,
On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
> I would like to know the expected differences between a FUSE and a kernel
> mount.
>
> Why the 2 options? When should I use one and when should I use the other?
The kernel mount code always lags behind the development process. But
ha
On Mon, Nov 25, 2019 at 09:57, Robert Sander wrote:
> On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
>
> > I would like to know the expected differences between a FUSE and a kernel
> > mount.
> >
> > Why the 2 options? When should I use one and when should I use the other?
>
> T
On Mon, Nov 25, 2019 at 1:57 PM Robert Sander wrote:
>
> Hi,
>
> On 25.11.19 at 13:36, Rodrigo Severo - Fábrica wrote:
>
> > I would like to know the expected differences between a FUSE and a kernel
> > mount.
> >
> > Why the 2 options? When should I use one and when should I use the other?
>
>
This is the seventh bugfix release of the Mimic v13.2.x long term stable
release series. We recommend all Mimic users upgrade.
For the full release notes, see
https://ceph.io/releases/v13-2-7-mimic-released/
Notable Changes
MDS:
- Cache trimming is now throttled. Dropping the MDS cac
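For anyone looking for the cache drop command referenced above, it can be invoked roughly as follows (MDS rank and the optional timeout value are placeholders; exact syntax may vary slightly between releases):

    ceph tell mds.0 cache drop 600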
I have a question about ceph cache pools as documented on this page:
https://docs.ceph.com/docs/nautilus/dev/cache-pool/
Is the cache pool feature still considered a good idea? Reading some of
the email archives, I find discussion suggesting this caching is no longer
recommended, for version=nau
Hi,
Regarding your response:
"You should not use more than 1 GB for WAL and 30 GB for RocksDB. Sizes
other than 3, 30, 300 (GB) for block.db are useless."
Do you mean the block.db size should be 3, 30 or 300 GB and nothing else?
If so, why not?
Thanks,
Frank
If I do an fstrim /mount/fs on an XFS filesystem that sits directly on an
RBD device, I can see space being freed instantly with e.g. rbd du. However,
when there is an LVM layer in between, it looks like the space is not freed.
I already enabled issue_discards = 1 in lvm.conf, but as the comment says,
probably only in
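A rough way to check where the discards stop being propagated (device, VG/LV and pool/image names below are placeholders):

    # non-zero DISC-GRAN/DISC-MAX means the layer passes discards through
    lsblk --discard /dev/rbd0
    lsblk --discard /dev/mapper/vg0-lv0

    # then trim and compare usage on the RBD side
    fstrim -v /mount/fs
    rbd du <pool>/<image>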
On 11/25/19 6:05 PM, Erdem Agaoglu wrote:
What I can't find is the 138,509 G difference between
ceph_cluster_total_used_bytes and ceph_pool_stored_raw. This is not
static, BTW; checking the same data historically shows we have about
1.12x of what we expect. This seems to make our 1.5x EC o
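One way to watch that ratio directly from the cluster (the JSON field names below are taken from a Nautilus-era `ceph df` and may differ between releases):

    # cluster-wide used bytes
    ceph df -f json | jq '.stats.total_used_bytes'
    # sum of per-pool raw usage
    ceph df -f json | jq '[.pools[].stats.bytes_used] | add'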
On 11/25/19 7:41 PM, Rodrigo Severo - Fábrica wrote:
I would like to know the impact of having one single CephFS mount vs.
having several.
If I have several subdirectories in my CephFS that should be
accessible to different users, with users needing access to different
sets of mounts, would it be
On 11/26/19 4:10 AM, Frank R wrote:
Do you mean the block.db size should be 3, 30 or 300GB and nothing else?
Yes; otherwise RocksDB data will spill over to the slow device during
compaction rounds.
k
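The usual reasoning behind those numbers, as far as I understand it, comes from RocksDB's level sizing with the BlueStore defaults (base level ~256 MB, level multiplier 10, both assumed here):

    L1 ~ 0.25 GB
    L2 ~ 2.5  GB
    L3 ~ 25   GB
    L4 ~ 250  GB

A level is only kept on block.db if it fits there completely, so the useful capacities land near ~3 GB (L1+L2), ~30 GB (plus L3) and ~300 GB (plus L4); space in between goes unused, and compaction spills to the slow device.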
Hi All,
I have a query regarding objecter behaviour for homeless sessions. In
situations where all OSDs holding copies of an object (say, replication 3)
are down, the objecter assigns a homeless session (OSD=-1) to the client
request. This request makes the radosgw thread hang indefinitely as the d
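Not an answer on the objecter internals, but one mitigation often mentioned is giving librados clients an op timeout so such requests eventually fail instead of hanging forever; a sketch (the section name and the 30-second value are only examples):

    [client.rgw.gateway1]
        rados_osd_op_timeout = 30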