Hi,
I am planning to create a new 3-node Ceph storage cluster.
I will be using CephFS with Samba for a maximum of 10 clients for upload and
download.
Storage node hardware is a single 8-core Intel Xeon E5 v2, 32GB RAM, 2 x 10Gb
NICs, and 24 x 6TB SATA HDDs per node, with the OS on a separate SSD.
Earlier I have tested orchestration using ceph-deploy in the test setup.
Now, is there any other alternative to ceph-deploy?
On 3/13/20 10:47 PM, Chip Cox wrote:
Konstantin - in your Windows environment, would it be beneficial to be
able to have NTFS data land as S3 (object store) on a Ceph
storage appliance? Or does it have to be NFS?
Thanks and look forward to hearing back.
Nope, for Windows we use Ceph [...]
Hi.
Unless there are plans to go to petabyte scale with it, I really
don't see the benefit of getting CephFS involved over just an RBD image
with a VM running standard Samba on top.
More performant and less complexity to handle; zero gains (by my book).
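A rough sketch of that approach (pool, image name, size and share path are placeholders; the image could equally be attached to the VM as a QEMU/libvirt RBD disk instead of being mapped with krbd inside it):

    # create the RBD image and put a filesystem on it for the Samba VM
    rbd create rbd/samba-data --size 10T
    rbd map rbd/samba-data
    mkfs.xfs /dev/rbd/rbd/samba-data
    mount /dev/rbd/rbd/samba-data /srv/share

    # /etc/samba/smb.conf - plain share on top of the local filesystem
    [share]
        path = /srv/share
        read only = no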
Jesper
> Hi,
>
> I am planning to create a new 3-node Ceph storage cluster. [...]
I would say you definitely need more RAM with that many disks.
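To put a rough number on it (assuming BlueStore and its default osd_memory_target of about 4GB per OSD):

    24 OSDs x ~4GB osd_memory_target ≈ 96GB

before counting the OS, MON/MGR/MDS daemons and Samba, so something closer to 128GB per node would be far more comfortable than 32GB.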
On Sat, 14 Mar 2020 15:17:14 +0800 amudha...@gmail.com wrote
> Hi,
> I am planning to create a new 3-node Ceph storage cluster.
> I will be using CephFS with Samba for a maximum of 10 clients for upload and
> download.
> Storage node [...]
Hello Chad,
Starting with the problems caused by lost connections with the kernel CephFS
mount, and ending with a much simpler service setup, there are plenty of reasons.
But what would be the point in stacking different tools (kernel mount, SMB
service, ...) together, untested, just because you can?
--
Martin Verges
Managing director
Hello Amudhan,
> I will be using CephFS with Samba for a maximum of 10 clients for upload and
> download.
Please use the Samba VFS and not the kernel mount.
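A minimal smb.conf sketch of what that looks like (the share name, path and cephx user are placeholders, and it assumes your Samba build ships the vfs_ceph module):

    [cephfs-share]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no
        read only = no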
> Earlier I have tested orchestration using ceph-deploy in the test setup.
> Now, is there any other alternative to ceph-deploy?
Yes, try our deployment [...]
Hello, we are running a Ceph cluster + RGW on Luminous 12.2.12 that serves as an
S3-compatible storage backend. We have noticed some buckets where the `rgw.none` section in
the output of `radosgw-admin bucket stats` shows an extremely large value for
`num_objects`, which is not plausible. It does look like an u[...]
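For context, this is roughly where that counter appears in the stats output (bucket name and the value are placeholders):

    $ radosgw-admin bucket stats --bucket=<bucket-name>
    ...
        "usage": {
            "rgw.none": {
                "num_objects": <implausibly large value>
            },
            "rgw.main": { ... }
        }
    ...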
Hi,
I'm building a 4-node Proxmox cluster, with Ceph for the VM disk storage.
On each node, I have:
- 1 x 512GB M.2 SSD (for Proxmox/boot volume)
- 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
- 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)
I'm using the Proxmox "pveceph" command to [...]
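For reference, a rough sketch of that sequence (device names and the network are examples only, and exact option names can vary between Proxmox VE releases):

    pveceph install
    pveceph init --network 10.10.10.0/24
    pveceph mon create
    # one OSD per SATA SSD, with WAL/DB carved out of the Optane
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1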
We are already gathering the Ceph admin socket stats with the Diamond
plugin and sending that to graphite, so I guess I just need to look through
that to find what I'm looking for.
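If they are not already in those metrics, the same counters can be read straight off the admin socket (daemon name and socket path are examples):

    ceph daemon osd.0 perf dump
    # or, pointing at the socket file directly:
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump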
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Fri, Mar 13, 2020, [...]