I'm no expert, but maybe another test would be iperf, watching your CPU
utilization while it runs.
You can set iperf to run between a couple of monitors and OSD servers.
Try setting the MTU at 1500 or your switch's stock MTU,
then put the servers at 9000 and the switch at 9128 (for packet
overhead/management)
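Roughly something like this, assuming iperf3 on Linux ("mon01" here is just a
placeholder hostname):

# on the monitor node, start a listener
iperf3 -s

# on the OSD node, push traffic for 30 seconds with 4 parallel streams
# (watch CPU with top/htop in another terminal)
iperf3 -c mon01 -P 4 -t 30

# check that jumbo frames actually pass end to end before trusting MTU 9000
# (8972 = 9000 bytes minus 28 bytes of IP/ICMP header)
ping -M do -s 8972 mon01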
We're running journals on NVMe as well - SLES
before rebooting try deleting the links here:
/etc/systemd/system/ceph-osd.target.wants/
if we delete them first, it boots OK
if we don't delete them, the disks sometimes don't come up and we have to
run ceph-disk activate all
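Something along these lines (a sketch; the exact unit names may differ on your
setup):

# see what is currently wired to start at boot
ls /etc/systemd/system/ceph-osd.target.wants/

# remove the ceph-osd@N.service links before the reboot
rm /etc/systemd/system/ceph-osd.target.wants/ceph-osd@*.service

# if the OSDs still don't come up after boot, activate them by hand
# (the subcommand is spelled activate-all on the versions we have)
ceph-disk activate-all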
HTH
Thanks Joe
>>> David Turne
Hi
Not too sure what you are looking for, but these are the type of
performance numbers we are getting on our Jewel 10.2 install.
We have tweaked things up a bit to get better write performance.
All writes using fio (libaio) with a 2 minute warm-up and a 10 minute run
6 node cluster - spinning disk with s
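A rough sketch of that kind of fio run (the target device and block size here
are placeholders, not our exact job file):

fio --name=writetest \
    --ioengine=libaio --direct=1 \
    --rw=write --bs=4k --iodepth=32 --numjobs=4 \
    --ramp_time=120 --runtime=600 --time_based \
    --filename=/dev/rbd0    # placeholder device, point at your test image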
Hi
What are you using your cluster for?
Are the pools RBD by chance, with images for VMs of some sort?
Did you add any OSDs after the pools were created, or redeploy any?
Thanks Joe
>>> Osama Hasebou 2/15/2018 6:14 AM >>>
Hi All,
I am seeing a lot of uneven distribution of data among the
I have a question about block.db and block.wal
How big should they be?
Relative to drive size or SSD size?
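For context, the sizes end up getting fixed at OSD creation time by the
partitions you hand over; a sketch with placeholder devices, not a sizing
recommendation:

# the size of the db/wal partitions you pass in is what bluestore gets,
# so carve up the NVMe first (partition sizes are up to you)
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2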
Thanks Joe
>>> Michel Raabe 2/16/2018 9:12 AM >>>
Hi Peter,
On 02/15/18 @ 19:44, Jan Peters wrote:
> I want to evaluate ceph with bluestore, so I need some hardware/configure
> advi
Are you using bluestore OSDs?
If so, my thought process is that what we are having an issue with is
caching and bluestore
see the thread on bluestore caching
"Re: [ceph-users] Best practices for allocating memory to bluestore cache"
before when we were on Jewel and filestore we could get a ...
...rs so from the thread, but I thought it would be good to confirm that
before digging in further.
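For reference, these are the sort of knobs that thread is about; a ceph.conf
sketch with example values, not a recommendation:

[osd]
# per-OSD bluestore cache for HDD-backed and SSD-backed OSDs
bluestore_cache_size_hdd = 1073741824    # 1 GiB, example value
bluestore_cache_size_ssd = 3221225472    # 3 GiB, example value
# or set bluestore_cache_size to override both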
David Byte
Sr. Technology Strategist
SCE Enterprise Linux
SCE Enterprise Storage
Alliances and SUSE Embedded
db...@suse.com
918.528.4422
From: ceph-users on behalf of Joe Comeau
Date: Friday, Au
>>> "Joe Comeau" 9/1/2018 8:21 PM >>>
Yes, I was referring to Windows Explorer copies, as that is what users
typically use,
but also with Windows robocopy set to 32 threads
the difference is we may go from a peak
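(For reference, the threading is just robocopy's /MT switch; paths here are
placeholders:)

robocopy \\fileserver\share \\cephgw\share /E /MT:32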
I'm curious about optane too
We are running Dell 730xd & 740xd with expansion chassis
12 x 8 TB disks in the server and 12 x 8 TB in the expansion unit
2 x 2 TB Intel NVMe for caching in the servers (12 disks cached with wal/db on
the opposite NVMe from the Intel cache - so interleaved)
Intel cache running on
Hi
We're using SUSE Ent Storage - Ceph
And have Dell 730xd and expansion trays with 8 TB disks
We initially had the controller cache turned off as per the Ceph documentation
(so configured as JBOD in the Dell BIOS)
We reconfigured as RAID0 and use the cache now for both the internal and
expansion drives
After reading Reed's comments about losing power to his data center, I
think he brings up a lot of good points.
So take Dells advice I linked into consideration with your own
environment
We also have 8TB disks with Intel P3700 for journal
Our large UPS and new generators which are tested week
When I am upgrading from filestore to bluestore,
or doing any other server maintenance for a short time
(i.e. high I/O while rebuilding), I set:
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub
when finished
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph osd unset noout
again on
Hi Dave
Have you looked at the Intel P4600 vs the P4500?
The P4600 has better random writes and better drive writes per day, I
believe
Thanks Joe
>>> 11/13/2018 8:45 PM >>>
Thanks Merrick!
I checked the Intel spec [1]; the performance Intel quotes is:
· Sequential Read (up to) 500 MB
I wonder if anyone has dealt with deep-scrubbing being really heavy when it
kicks off at the defined start time?
I currently have a script that kicks off and runs a deep-scrub every 10 minutes
on the oldest un-deep-scrubbed PG
this script runs 24/7 regardless of when deep scrub is scheduled
my cep
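The script is basically along these lines (a rough sketch, assuming jq is
available; the JSON layout of "ceph pg dump" varies between releases, on some
it is .pg_map.pg_stats instead of .pg_stats):

#!/bin/bash
# deep-scrub the PG with the oldest last_deep_scrub_stamp;
# run from cron every 10 minutes
pgid=$(ceph pg dump pgs --format=json 2>/dev/null \
       | jq -r '.pg_stats | sort_by(.last_deep_scrub_stamp) | .[0].pgid')
[ -n "$pgid" ] && ceph pg deep-scrub "$pgid"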
Just a note that we use SUSE for our Ceph/VMware system
These are the general Ceph docs for VMware/iSCSI:
https://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/
These are the SUSE docs:
https://documentation.suse.com/ses/6/html/ses-all/cha-ceph-iscsi.html
they differ
I'll tell you what we'v