Thanks Wido, those are good pointers indeed :)
So we just have to make sure the backend storage (SSD/NVMe journals) won’t be
saturated (or the controllers) and then go with as many RBDs per VM as possible.
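As a rough sketch of what "many RBDs per VM" could look like (pool and image names below are made up, and the attach step is only indicative):

# create a handful of images for one guest instead of a single large one
for i in 1 2 3 4; do
    rbd create ssd-pool/vm01-disk$i --size 102400   # size in MB, i.e. 100 GB each
done
# then attach each image to the guest as its own virtio disk via libvirt/qemu,
# so the guest can issue I/O against several RBDs in parallel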
Kind Regards,
David Majchrzak
On 16 Jan 2016, at 22:26, Wido den Hollander wrote:
> On 01/16/
Hi,
I'm looking to implement CephFS on my Firefly release (v0.80) with
an XFS native file system, but so far I'm having some difficulties. After
following the ceph/qsg and creating a storage cluster, I have the following
topology:
admin node - mds/mon
osd1
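A minimal sketch of the steps involved on a Firefly-era cluster (assuming the MDS lives on the admin node and the mount point is /mnt/cephfs; adjust names to your setup):

# deploy an MDS on the admin node and check that it becomes active
ceph-deploy mds create admin-node
ceph mds stat

# mount with the FUSE client...
ceph-fuse -m admin-node:6789 /mnt/cephfs

# ...or with the kernel client
mount -t ceph admin-node:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret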
The changes you are looking for are coming from SanDisk in the Ceph "Jewel"
release coming up.
Based on benchmarks and testing, SanDisk has contributed heavily to the tuning
work and is promising 90%+ of a drive's native IOPS in the cluster.
The biggest changes will come from the memory
That is indeed great news! :)
Thanks for the heads up.
Kind Regards,
David Majchrzak
On 17 Jan 2016, at 21:34, Tyler Bishop wrote:
> The changes you are looking for are coming from SanDisk in the Ceph "Jewel"
> release coming up.
>
> Based on benchmarks and testing, SanDisk has really contribute
Based on Sebastien's design I had some thoughts:
http://www.sebastien-han.fr/images/ceph-cache-pool-compute-design.png
Hypervisors are for obvious reasons more susceptible to crashes and reboots for
security updates. Since Ceph is utilizing a standard pool for the cache tier, it
creates a requir
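For reference, wiring a cache tier in front of a base pool looks roughly like this (pool names and the target size are made up):

# attach the cache pool to the base pool and make it a writeback tier
ceph osd tier add rbd-base cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay rbd-base cache-pool

# basic cache parameters
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool target_max_bytes 1099511627776   # ~1 TB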
Adding to this thought: even if you are using a single replica for the cache
pool, will Ceph scrub the cached blocks against the base tier? What if you have
corruption in your cache?
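As far as I can tell, scrubbing works per placement group and only compares replicas within the same pool, so a size=1 cache PG has nothing to compare against and is never checked against the base tier. A quick way to poke at it (pool id 5 is hypothetical):

# list the PGs belonging to the cache pool (here assumed to be pool id 5)
ceph pg dump pgs_brief | awk '$1 ~ /^5\./ {print $1}'

# manually trigger a deep scrub of one of those PGs and watch the cluster log
ceph pg deep-scrub 5.1f
ceph -w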
From: "Tyler Bishop"
To: ceph-users@lists.ceph.com
Cc: "Sebastien han"
Sent: Sunday, January 17, 2016 3:47:
On 16/01/16 05:39, Robert LeBlanc wrote:
> If you are not booting from the GPT disk, you don't need the EFI
> partition (or any special boot partition). The required backup GPT is
> usually put at the end of the disk, where there is typically some free space anyway.
> It has been a long time since I've converted
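In case it helps, a rough sgdisk-based sketch of such a conversion (the device name is just an example; back up the existing partition table first):

# print the current (MBR) partition table
sgdisk -p /dev/sdb

# convert MBR to GPT in place; the backup GPT structures go at the end of the disk
sgdisk -g /dev/sdb

# verify the result
sgdisk -v /dev/sdb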
Hi Everyone,
Looking for a double-check of my logic and CRUSH map.
Overview:
- osdgroup bucket type defines a failure domain within a host of 5 OSDs + 1 SSD.
Therefore 5 OSDs (all utilizing the same journal) constitute an osdgroup
bucket. Each host has 4 osdgroups (see the CRUSH map edit sketch below).
- 6 monitors
- Two node c
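The usual way to introduce a custom bucket type like osdgroup is to edit the decompiled CRUSH map by hand; roughly (file names are arbitrary):

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# in crushmap.txt, declare the extra type between osd and host, e.g.
#   type 0 osd
#   type 1 osdgroup
#   type 2 host
# then define one osdgroup bucket per journal SSD containing its 5 OSDs,
# and point the ruleset's chooseleaf step at type osdgroup

# recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new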
Hello,
Can someone explain to me the difference between the df and du commands
concerning the data used in my CephFS? And which is the correct value,
958M or 4.2G?
~# du -sh /mnt/cephfs
958M    /mnt/cephfs
~# df -h /mnt/cephfs/
Filesystem      Size  Used Avail Use% Mounted on
ceph-fuse        55T  4.2G   55T   1% /mnt/cephfs
Hello,
On Sat, 16 Jan 2016 19:06:07 +0100 David wrote:
> Hi!
>
> We’re planning our third Ceph cluster and have been trying to find how to
> maximize IOPS on this one.
>
> Our needs:
> * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
> servers)
> * Pool for storage of many smal
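A bare-bones sketch of creating those pools (pool names, PG counts, and image sizes are placeholders and would need sizing for the actual OSD count):

# one pool per workload so they can get their own rulesets and settings later
ceph osd pool create mysql 512 512 replicated
ceph osd pool create smallfiles 512 512 replicated

# an RBD image for one MySQL guest (size in MB, i.e. 50 GB)
rbd create mysql/db01 --size 51200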
On 18/01/2016 04:19, Francois Lafont wrote:
> ~# du -sh /mnt/cephfs
> 958M /mnt/cephfs
>
> ~# df -h /mnt/cephfs/
> Filesystem      Size  Used Avail Use% Mounted on
> ceph-fuse        55T  4.2G   55T   1% /mnt/cephfs
Even with the --apparent-size option, the sizes are diff
As I understand it:
4.2G is what is used by Ceph (all replication, metadata, et al.); it is the sum
of all the space "used" on the OSDs.
958M is the actual space the data in CephFS is using (without replication).
3.8G means you have some sparse files in CephFS.
'ceph df detail' should return something close
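For completeness, these are the commands that show that breakdown (no output reproduced here, since it depends on the cluster):

# cluster-wide raw capacity/usage plus per-pool stats (what df's 55T / 4.2G reflect)
ceph df detail

# per-pool object and byte counts, including the cephfs data and metadata pools
rados df

# plain du sums the blocks actually used by the files (the 958M figure);
# with --apparent-size it sums file lengths instead, which is larger for sparse files
du -sh /mnt/cephfs
du -sh --apparent-size /mnt/cephfs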