Hi to the ceph-users list!
We're setting up a new Ceph infrastructure:
- 1 MDS admin node
- 4 OSD storage nodes (60 OSDs)
each of them running a monitor
- 1 client
Each 32GB RAM / 16-core OSD node supports 15 x 4TB SAS OSDs (XFS) and 1
SSD with 5GB journal partitions, all in JBOD attachment.
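For reference, a minimal sketch of how one of those 15 OSDs per node could be
prepared, with its 5GB journal on a dedicated partition of the shared SSD (the
device names below are assumptions, not our actual layout):

  # data disk /dev/sdb, journal partition /dev/sdq1 on the SSD (hypothetical names)
  ceph-deploy osd create helga:/dev/sdb:/dev/sdq1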
pect that
you have more primaries on the hot nodes.
Since you're testing, try repeating the test on 3 OSD nodes instead of
4. If you don't want to run that test, you can generate a histogram
from ceph pg dump data, and see if there are more primary OSDs (the
first one in the acting array).
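A rough sketch of one way to build that histogram from plain 'ceph pg dump'
output (the column layout varies between releases, so locating the 'acting'
column from the header line is an assumption to adjust for your version):

  # count how many PGs each OSD serves as primary (first OSD of the acting set)
  ceph pg dump 2>/dev/null | awk '
    /^pg_stat/ { for (i = 1; i <= NF; i++) if ($i == "acting") col = i }
    col && $1 ~ /^[0-9]+\.[0-9a-f]+$/ {
      gsub(/[][]/, "", $col); split($col, a, ","); n[a[1]]++
    }
    END { for (o in n) print "osd." o, n[o] }
  ' | sort -k2 -rn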
mon_initial_members = helga
mon_host = X.Y.Z.64
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = X.Y.0.0/16
Regards,
Frederic
Thanks & Regards
Somnath
, puppetized and dedicated to their OSD-node role.
I don't know if that's a possibility, but a third way: the tools
collect/deliver wrong information and don't show all the CPU cycles involved.
Frederic
Gregory Farnum wrote on 23/03/15 15:04:
On Mon, Mar 23, 2015 at 4:31 AM,
has more memory
pressure as well (and that's why tcmalloc is also having that issue). Can
you give us the output of 'ceph osd tree' to check if the load
distribution is even? Also, check whether those systems are swapping or not.
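For anyone following along, a quick way to run both checks on each OSD node
(standard tools, nothing Ceph-specific about the swap part):

  ceph osd tree    # compare CRUSH weights and OSD counts per host
  free -m          # check the swap line for used swap
  vmstat 1 5       # non-zero si/so columns mean the node is actively swapping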
Hope this helps.
Thanks & Regards
Somnath
Hi,
in our quest to get the right SSD for OSD journals, I managed to
benchmark two kinds of "10 DWPD" SSDs:
- Toshiba M2 PX02SMF020
- Samsung 845DC PRO
I want to determine if a disk is appropriate considering its absolute
performance, and the optimal number of ceph-osd processes using the SSD.
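As background, a commonly used way to see how a journal SSD holds up under
O_DSYNC writes (the pattern the OSD journal generates) is a short fio run. The
parameters below are an illustrative sketch rather than the exact benchmark
used here; varying numjobs emulates several ceph-osd processes sharing the
same SSD:

  # 4k synchronous sequential writes; /dev/sdq is a hypothetical SSD device
  # WARNING: this writes directly to the device and destroys existing data
  fio --name=journal-test --filename=/dev/sdq --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=4 --iodepth=1 \
      --runtime=60 --time_based --group_reporting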
Hi all,
I want to alert you to a command we've learned to avoid because of its
inconsistent results.
On Giant 0.87.1 and Hammer 0.93.0 (ceph-deploy-1.5.22-0.noarch was used
in both cases), the "ceph-deploy disk list" command has a problem.
We should get an exhaustive list of device entries, like this one
thing once before, but at the time didn't have
the chance to check if the inconsistency was coming from ceph-deploy
or from ceph-disk. This certainly seems to point at ceph-deploy!
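One simple way to narrow that down would be to run both tools against the
same node and compare the device entries by hand (the host name below is just
an example):

  ceph-deploy disk list helga     # listing as reported through ceph-deploy
  ssh helga ceph-disk list        # same node, straight from ceph-disk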
- Travis
On Wed, Apr 8, 2015 at 4:15 AM, f...@univ-lr.fr wrote:
Hi all,
I want to alert on a command
ad on 4k blocks for the price.
The benchmarks were made on a Dell R730xd with an H730P SAS controller (LSI 3108,
12Gb/s SAS).
Frederic
f...@univ-lr.fr wrote on 31/03/15 14:09:
Hi,
in our quest to get the right SSD for OSD journals, I managed to
benchmark two kinds of "10 DWPD" SSDs:
Hi all,
might there be a problem with the CRUSH function during a 'from scratch'
installation of 0.94.1-0?
This has been tested many times, with ceph-deploy-1.5.22-0 or
ceph-deploy-1.5.23-0. Platform RHEL7.
Each time, the new cluster ends up in a weird state never seen on my
previous installs.
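For comparison, a sketch of the commands that would dump the CRUSH state of
such a freshly installed cluster, to diff against a known-good install (output
file names are arbitrary):

  ceph osd tree                              # host/OSD placement as CRUSH sees it
  ceph osd crush dump > crush-0.94.1.json    # buckets, rules and tunables as JSON
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush-0.94.1.txt # decompiled map, handy for diffing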
nt to the physical interface.
Now the question: as it compromises redundancy, is this behaviour by
design?
Frederic
f...@univ-lr.fr wrote on 21/04/15 15:03:
Hi all,
might there be a problem with the CRUSH function during a 'from scratch'
installation of 0.94.1-0?
This has