olc, I think you haven't posted this to the ceph-users list.
On 31/12/2015 15:39, olc wrote:
> Same model _and_ same firmware (`smartctl -i /dev/sdX | grep Firmware`)? As
> far as I've been told, this can make huge differences.
Good idea indeed. I have checked: the versions are the same. Finally, af
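For reference, a quick way to compare model and firmware across all drives on a node at once (a minimal sketch; the /dev/sd? glob is an assumption and may need adjusting to your drive naming):
# Print model and firmware version for every drive matching the glob.
for d in /dev/sd?; do
  echo "== $d"
  smartctl -i "$d" | grep -E 'Device Model|Firmware Version'
done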
Hi,
On 31/12/2015 15:30, Robert LeBlanc wrote:
> Because Ceph is not perfectly distributed there will be more PGs/objects in
> one drive than others. That drive will become a bottleneck for the entire
> cluster. The current IO scheduler poses some challenges in this regard.
> I've implemented a n
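As context for the imbalance Robert describes, the per-OSD spread can be checked directly from the cluster (a minimal sketch; assumes a release recent enough to provide `ceph osd df`):
# Show per-OSD size, utilisation and variance to spot the most loaded drive.
ceph osd df tree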
Hi,
On 03/01/2016 02:16, Sam Huracan wrote:
> I tried restarting all the OSDs but it was not effective.
> Is there any way to apply this change transparently to the clients?
You can use this command (as an example):
# On a cluster node where the admin account is available.
ceph tell 'osd.*' injectargs '--os
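The command in the reply above is cut off; the general shape is something like the following (a sketch only: the option shown is just one of the filestore values from the original question, and some settings are not honoured at runtime):
# Push a runtime change to every OSD from a node holding the admin keyring.
ceph tell 'osd.*' injectargs '--filestore_max_sync_interval 15'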
Hi,
I intend to add some config, but how do I apply it on a production system?
[osd]
osd journal size = 0
osd mount options xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k"
filestore min sync interval = 5
filestore max sync interval = 15
filestore queue max ops = 2048
filestore queue max bytes =
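Note that these settings do not all apply the same way: the filestore_* values can usually be injected at runtime (see the injectargs reply above), while the journal size and the XFS mount options are only read when the journal is created or the OSD is started/remounted, so they effectively need an OSD restart. To check what a running OSD actually uses (a sketch; osd.0 is a placeholder id, run it on the node hosting that OSD):
# Query the running daemon's configuration over its admin socket.
ceph daemon osd.0 config show | grep -E 'filestore_(min|max)_sync_interval|filestore_queue_max'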
Running a single-node Proxmox "cluster", with Ceph on top. 1 Mon, same node.
I have 24 HDDs (no dedicated journal) and 8 SSDs, split via a "custom crush location
hook".
Cache tier (SSD OSDs) in front of an EC pool (HDD OSDs), providing access for Proxmox via
krbd.
15 TB capacity (assortment of disk sizes/speeds)
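For anyone reproducing this layout, the tiering part is usually wired up along these lines (a rough sketch with placeholder pool names, not necessarily the poster's exact commands):
# Put the SSD-backed cache pool in front of the HDD-backed EC pool.
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
With the overlay set, clients (here Proxmox via krbd) address the EC pool and their IO is transparently routed through the cache pool.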
I believe so. I'm using ceph-9.2.0.
On Jan 2, 2016 9:53 AM, "Dan Nica" wrote:
> Are you using the latest ceph-deploy?
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Adam
> Sent: Saturday, January 2, 2016 4:22 AM
> To: ceph-users@lists.ceph
Are you using the latest ceph-deploy?
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Adam
Sent: Saturday, January 2, 2016 4:22 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] systemd support?
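For completeness: Infernalis (9.2.0) does ship systemd unit files, so on such a node the daemons show up as instantiated units. A quick way to check (a sketch; instance names like osd.0 are placeholders):
# List any Ceph-related systemd units known on this node.
systemctl list-units 'ceph*'
# Status of one OSD instance.
systemctl status ceph-osd@0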