Hi Marc,
Hi Vitaliy, just saw you recommend someone to use SSDs, and wanted to use
the opportunity to thank you for composing this text[0], enjoyed reading it.
- What do you mean with: bad-SSD-only?
A cluster consisting only of bad SSDs, like desktop ones :) their
latency with fsync is almost
Hi Vitaliy, just saw you recommend someone to use SSDs, and wanted to use
the opportunity to thank you for composing this text[0], enjoyed reading it.
- What do you mean with: bad-SSD-only?
- Is this patch[1] in a Nautilus release?
[0] https://yourcmc.ru/wiki/Ceph_performance
[1] https://git
Option 1 is the official way; option 2 will be a lot faster if it works for
you (I was never in a situation requiring this, so I can't say); and option 3
is for filestore and not applicable to BlueStore.
On Wed, 10 Jul 2019 at 07:55, Davis Mendoza Paco
wrote:
What would be the most appropriate procedure to move blockdb/wal to SSD?
1.- remove the OSD and recreate it (affects the performance)
ceph-volume lvm prepare --bluestore --data --block.wal --block.db
2.- Follow the documentation
http://heiterbiswolkig.blogs.nde.ag/2018/04/08/migrating-bluestor
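For reference, a sketch of what option 1 looks like in full; the device paths
below are hypothetical placeholders, not values from this thread:

  # after removing/zapping the old OSD, recreate it with external DB/WAL
  ceph-volume lvm prepare --bluestore \
      --data /dev/sdb \
      --block.db /dev/nvme0n1p1 \
      --block.wal /dev/nvme0n1p2
  ceph-volume lvm activate --all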
One thing to keep in mind is that the blockdb/wal device becomes a Single Point
Of Failure for all OSDs using it. So if that SSD dies, you essentially have to
consider all OSDs using it as lost. I think most go with something like 4-8
OSDs per blockdb/wal drive, but it really depends how risk-averse you are.
Just set 1 or more SSDs for BlueStore; as long as you're within the 4% rule,
I think it should be enough.
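As a rough illustration of what the 4% guideline implies (my own arithmetic,
with hypothetical drive sizes, not figures from this thread):

  # block.db sized at ~4% of the data device
  # 4 TB HDD OSD  -> 0.04 * 4000 GB = ~160 GB of DB space per OSD
  # 8 such OSDs sharing one SSD -> ~1.3 TB of SSD just for block.db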
On Fri, Jul 5, 2019 at 7:15 AM Davis Mendoza Paco
wrote:
Hi all,
I have installed Ceph Luminous with 5 nodes (45 OSDs); each OSD server
supports up to 16 HDDs and I'm only using 9.
I wanted to ask for help to improve IOPS performance, since I have about 350
virtual machines of approximately 15 GB in size and I/O processes are very
slow.
What do you recommend?
Hi Cephers,
In case you missed the Ceph Performance Weekly of April 26th 2018, it
is now up on our YouTube Channel:
https://youtu.be/I_TxLKiYLCw
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
On Mon, Apr 2, 2018 at 11:18 AM Robert Stanford
wrote:
This is a known issue as far as I can tell; I've read about it several
times. Ceph performs great (using radosgw), but as the OSDs fill up,
performance falls sharply. I am down to half of the empty-cluster performance
at about 50% disk usage.
My questions are: does adding more OSDs / disks to the cluster
On 05/04/17 13:37, Fuxion Cloud wrote:
Hi,
Our Ceph version is 0.80.7. We use it with OpenStack as block storage (RBD).
The Ceph storage is configured with 3x replication of data. I'm getting low
IOPS (400) from an fio benchmark in random read/write. Please advise how to
improve it. Thanks.
Here's the hardware info.
12 x storage nodes
-
On Thu, May 4, 2017 at 7:53 PM, Fuxion Cloud wrote:
Hi all,
I'm a newbie to Ceph. We have had Ceph deployed by a vendor 2 years ago on
Ubuntu 14.04 LTS, without any performance tuning. I noticed that the
performance of the storage is very slow. Can someone please help advise how
to improve the performance?
Any changes or configuration requir
Hello,
I have a Ceph cluster with 25% of OSDs (200 OSDs in the cluster, and 50 OSDs
are above 80%) filled with data. Does this (25% of OSDs filled above 80%)
cause the Ceph cluster slowness (slow write operations)? Any hint will
help?
Thanks
Swami
Hi,
if you write from a client, the data is written to one (or more)
placement groups in 4MB chunks. These PGs are written to the journal and the
OSD disk, and because of this the data is also in the Linux file buffer on the
OSD node (until the OS needs the memory for other data (file buffer
or anything el
Thanks!
One more question: what do you mean by "bigger"?
Do you mean a bigger block size (say, if I run a read test with bs=4K, do I
need to first write the rbd with bs>4K)? Or a size that is big enough to co
With a filestore backend, the added advantage of preconditioning the rbd is
that the files in the filesystem will be created beforehand.
Thanks & Regards
Somnath
Thanks Somnath!
As you recommended, I exec
> We got bw=1162.7MB/s; in b.txt, we get bw=3579.6MB/s.
>
> Mostly due to the kernel buffer of the client host.
Fill up the image with big writes (say 1M block size) first before reading and
you should see sane throughput.
Thanks & Regards
Somnath
Hi Guys,
we have a ceph cluster with 6 machines (6 OSDs per host).
1. I created 2 images in Ceph, and mapped them to another host A (outside the
Ceph cluster). On host A, I got /dev/rbd0 and /dev/rbd1.
2. I started two fio jobs to perform a READ test on rbd0 and rbd1. (The fio job
descriptions can be foun
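A minimal sketch of the prefill Somnath suggests above, assuming the
kernel-mapped /dev/rbd0 from step 1 (standard fio/sysctl invocations, not the
original job files from this thread):

  # write the whole image once with large blocks so later reads hit real data
  fio --name=prefill --filename=/dev/rbd0 --rw=write --bs=1M \
      --ioengine=libaio --direct=1 --iodepth=16
  # drop the client page cache before running the read test
  echo 3 > /proc/sys/vm/drop_caches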
keep you informed if I find something ...
Sent from my Samsung device
Original message
From: Kevin Olbrich
Date: 11/25/16 19:19 (GMT+05:30)
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph performance laggy (requests blocked > 32) on OpenStack
If I use slow HDDs, I can get the same outcome. Placing journals on fast SAS or
NVMe SSDs will make a difference. If you are using SATA SSDs, those SSDs are
much slower. Instead of guessing why Ceph is lagging, have you looked at ceph -w
and iostat and vmstat reports during your tests? iostat will
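For example (standard tooling, not specific to this cluster), something like
the following on the OSD nodes while the VMs are under load:

  ceph -w          # watch for slow/blocked request messages
  iostat -xm 2     # per-disk utilization, await, queue sizes
  vmstat 2         # CPU, run queue, swap and I/O wait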
Hi,
we are running 80 VMs using KVM in OpenStack via RBD in Ceph Jewel on a
total of 53 disks (RAID parity already excluded).
Our nodes are using Intel P3700 DC-SSDs for journaling.
Most VMs are linux based and load is low to medium. There are also about 10
VMs running Windows 2012R2, two of them
I am using O_DIRECT=1
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Wednesday, July 27, 2016 8:33 AM
To: EP Komarla ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph performance pattern
Ok. Are you using O_DIRECT? That will disable readahead on the
I am using aio engine in fio.
Fio is working on rbd images
- epk
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Tuesday, July 26, 2016 6:27 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph performance pattern
Hi epk,
Which ioengine are you using? if it's librbd, you might try playing
with librbd readahead as well:
# don't disable readahead after a certain number of bytes
rbd readahead disable after bytes = 0
# Set the librbd readahead to whatever:
rbd readahead max bytes = 4194304
If it's with k
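For the librbd case, a sketch of where those two options would go, assuming
they are set in the client-side ceph.conf (values copied from above):

  [client]
  rbd readahead disable after bytes = 0
  rbd readahead max bytes = 4194304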
How did you deploy Ceph Jewel on Debian 7?
2016-07-26 1:08 GMT+08:00 Mark Nelson :
> Several years ago Mark Kampe proposed doing something like this. I was
> never totally convinced we could make something accurate enough quickly
> enough for it to be useful.
>
> If I were to attempt it, I would
Not exactly, but we are seeing some drop with 256K compared to 64K. This is
with random reads though, in Ubuntu. We had to bump up read_ahead_kb from the
default 128KB to
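For illustration, raising read_ahead_kb on one (hypothetical) data disk via
sysfs; the right value depends on your workload:

  cat /sys/block/sdb/queue/read_ahead_kb          # default is typically 128
  echo 4096 > /sys/block/sdb/queue/read_ahead_kb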
Which OS/kernel are you running with?
Try setting a bigger read_ahead_kb for sequential runs.
Thanks & Regards
Somnath
Hi,
Below are fio results for sequential read on my Ceph cluster. I am
trying to understand this pattern:
- why is there a dip in the performance for block sizes 32k-256k?
- is this an expected performance graph?
- have you seen this kind of pattern before?
[attached image: fio sequential read performance graph]
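A minimal fio job of the kind being discussed, sweeping block sizes for
sequential reads; the device name and parameters are hypothetical, not EP's
actual job file:

  for bs in 4k 32k 64k 256k 1m 4m; do
    fio --name=seqread-$bs --filename=/dev/rbd0 --rw=read --bs=$bs \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based
  done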
Several years ago Mark Kampe proposed doing something like this. I was
never totally convinced we could make something accurate enough quickly
enough for it to be useful.
If I were to attempt it, I would probably start out with a multiple
regression approach based on seemingly important confi
Team,
I have a performance-related question on Ceph.
I know the performance of a Ceph cluster depends on many factors, like the type
of storage servers, processors (number of processors, raw processor
performance), memory, network links, type of disks, journal disks, etc. On top
of the hardware feature
Hi Denver,
it's like Christian said. On top of that, I would add that iSCSI is
a more native protocol. You don't have to go through as many
layers as you do, by design, with software-defined storage.
So you can always expect better performance with hardware-accelerated
iSCSI.
If
Hello,
On Wed, 22 Jun 2016 11:09:46 +1200 Denver Williams wrote:
Hi All
I'm planning an OpenStack private cloud deployment and I'm trying to
decide what would be the better option.
What would the performance advantages/disadvantages be when comparing a
3-node Ceph setup with 15K/12G SAS drives in an HP DL380p G8 server with
SSDs for write cache, compared to s
number of good discussions relating to
endurance, and suitability as a journal device.
Thanks, Mark.
Yes, we're using XFS and 3x replication, although we might switch to
2x replication since we're not too worried about resiliency.
I did some tests on single disks with dd, and am able to get about 152 MB/s
writes and 191 MB/s reads from a single disk. I also ran the same test on
all 13
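For reference, a dd test of the sort described, assuming direct I/O against a
single raw disk (hypothetical device name; the write test destroys data on
that disk):

  dd if=/dev/zero of=/dev/sdb bs=1M count=4096 oflag=direct   # sequential write
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct   # sequential read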
Hi Sergio
On 04/07/2016 07:00 AM, Sergio A. de Carvalho Jr. wrote:
Hi all,
I've set up a testing/development Ceph cluster consisting of 5 Dell
PowerEdge R720xd servers (256GB RAM, 2x 8-core Xeon E5-2650 @ 2.60 GHz,
dual-port 10Gb Ethernet, 2x 900GB + 12x 4TB disks) running CentOS 6.5 and
Ceph Hammer 0.94.6. All servers use one 900GB disk for the root partition
and
>> Just a quick update after up'ing the threshol
Very nice.
You're my hero!
Shinobu
IIRC, it only triggers the move (merge or split) when that folder is hit by a
request, so most likely it happens graduall
if that helps to
> bring things back into order.
I've just made the same change ( 4 and 40 for now) on my cluster which is a
similar
g on a big prod cluster. I'm
in favor of bumping these two up in the defaults.
Warren
Hrm, I think it will follow the merge/split rules if it's out of whack
given the new settings, but I don't know that I've ever tested it on an
existing cluster to see that it actually happens. I guess let it sit
for a while and then check the OSD PG directories to see if the object
counts make
Hey Mark,
I've just tweaked these filestore settings for my cluster -- after
changing this, is there a way to make ceph move existing objects
around to new filestore locations, or will this only apply to newly
created objects? (I would assume the latter...)
thanks,
-Ben
On Wed, Jul 8, 2015 at 6:
I somehow missed the original question, but if you run a database on CEPH you
will be limited not by throughput but by latency.
Even if you run OSDs with ramdisk, the latency will still be 1-2ms at best
(depending strictly on OSD CPU and memory speed) and that limits the number of
database trans
Hello,
On Tue, 1 Sep 2015 11:50:07 -0500 Kenneth Van Alstyne wrote:
> Got it — I’ll keep that in mind. That may just be what I need to “get
> by” for now. Ultimately, we’re looking to buy at least three nodes of
> servers that can hold 40+ OSDs backed by 2TB+ SATA disks,
>
As mentioned, pick d
I would caution against large OSD nodes. You can really get into a
pinch with CPU and RAM during recovery periods. I know a few people
have it working well, but it requires a lot of tuning to get it right.
Personally, 20 disks in a box are too much f
Got it — I’ll keep that in mind. That may just be what I need to “get by” for
now. Ultimately, we’re looking to buy at least three nodes of servers that can
hold 40+ OSDs backed by 2TB+ SATA disks,
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Vete
Just swapping out spindles for SSD will not give you orders of
magnitude performance gains as it does in regular cases. This is
because Ceph has a lot of overhead for each I/O which limits the
performance of the SSDs. In my testing, two Intel S3500 S
Thanks for the awesome advice folks. Until I can go larger scale (50+ SATA
disks), I’m thinking my best option here is to just swap out these 1TB SATA
disks with 1TB SSDs. Am I oversimplifying the short term solution?
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Hello,
On Mon, 31 Aug 2015 12:28:15 -0500 Kenneth Van Alstyne wrote:
In addition to the spot on comments by Warren and Quentin, verify this by
watching your nodes with atop, iostat, etc.
The culprit (HDDs) should be plainly visible.
More inline:
> Christian, et al:
>
> Sorry for the lack of
I would say you are probably simply IO starved because you're running too
many VMs.
To follow on from Warren's response, if you spread those 160 available iops
across 15 VMs, you are talking about roughly 10 iops per vm, assuming they
have similar workloads. That's almost certainly too little. I w
Hey Kenneth, it looks like you're just down the tollroad from me. I'm in
Reston Town Center.
Just as a really rough estimate, I'd say this is your max IOPS:
80 IOPS/spinner * 6 drives / 3 replicas = 160ish max sustained IOPS
It's more complicated than that, since you have a reasonable solid state
Christian, et al:
Sorry for the lack of information. I wasn't sure which of our hardware
specifications or Ceph configuration was useful information at this point.
Thanks for the feedback — any feedback is appreciated at this point, as I've
been beating my head against a wall trying to figure
Hello,
On Mon, 31 Aug 2015 08:31:57 -0500 Kenneth Van Alstyne wrote:
> Sorry about the repost from the cbt list, but it was suggested I post
> here as well:
>
I wasn't even aware a CBT (what the heck does that acronym stand for?)
existed...
> I am attempting to track down some performance issu
Sorry about the repost from the cbt list, but it was suggested I post here as
well:
I am attempting to track down some performance issues in a Ceph cluster
recently deployed. Our configuration is as follows:
3 storage nodes, each with:
- 8 Cores
- 64GB of
Basically for each PG, there's a directory tree where only a certain
number of objects are allowed in a given directory before it splits into
new branches/leaves. The problem is that this has a fair amount of
overhead and also there's extra associated dentry lookups to get at any
given object.
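The filestore split/merge thresholds discussed above control when those
directories split and merge; a sketch of how they might be raised in
ceph.conf (example values only, not a recommendation from this thread):

  [osd]
  # a subdirectory splits once it holds more than
  # filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects
  filestore merge threshold = 40
  filestore split multiple = 8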
If I create a new pool it is generally fast for a short amount of time.
Not as fast as if I had a blank cluster, but close to.
Bryn
> On 8 Jul 2015, at 13:55, Gregory Farnum wrote:
I think you're probably running into the internal PG/collection
splitting here; try searching for those terms and seeing what your OSD
folder structures look like. You could test by creating a new pool and
seeing if it's faster or slower than the one you've already filled up.
-Greg
On Wed, Jul 8,
Hi All,
I’m perf testing a cluster again,
This time I have re-built the cluster and am filling it for testing.
On a 10 min run I get the following results from 5 load generators, each
writing through 7 iocontexts, with a queue depth of 50 async writes.
Gen1
Percentile 100 = 0.729775905609
Max
> or 1024K write improves a lot. The problem is with 1024K read and 4K write.
> SSD journal: 810 IOPS and 810 MBps
> HDD journal: 620 IOPS and 620 MBps
>> I'll take a punt on it being a SATA connected SSD (most common); 5x ~130
>> megabytes/second gets very close to most SATA bus limits. If it's a shared
>> bus, you possibly hit that limit even earlier (since all that data is now
>> being written twice ou
that limit even earlier
(since all that data is now being written twice out over the bus).
cheers;
\Chris
Hi Ceph-Experts,
I have a small Ceph architecture related question.
Blogs and documents suggest that Ceph performs much better if we use the
journal on SSD.
I have made a Ceph cluster with 30 HDDs + 6 SSDs across 6 OSD nodes: 5 HDDs + 1
SSD on each node, and each SSD has 5 partitions for journaling the 5 OSDs.
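For a cluster of that era (ceph-disk based deployment), preparing one HDD
against the shared journal SSD might look like the sketch below; device names
are hypothetical:

  # ceph-disk creates a new journal partition on the SSD for each OSD prepared
  ceph-disk prepare /dev/sdb /dev/sdg    # data HDD, journal SSD
  ceph-disk activate /dev/sdb1
  # repeating for sdc..sdf leaves 5 journal partitions on /dev/sdg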
Hi,
Just a heads up: I hope you are aware of this tool:
http://ceph.com/pgcalc/
Regards,
Vikhyat
On 02/11/2015 09:11 AM, Sumit Gaur wrote:
Hi,
I am not sure why PG numbers have not been given that much importance in the
Ceph documents; I am seeing huge variation in performance numbers by
changing PG counts.
Just an example
without SSD:
36 HDD OSDs => PG count 2048 gives me random write (1024K bs) performance of
550 MBps
with SSD:
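For context, the usual rule of thumb behind the pgcalc tool mentioned above
(my arithmetic, assuming a single 3-replica pool on those 36 OSDs):

  # target ~100 PGs per OSD, divided by the replica count, rounded to a power of 2
  #   (36 * 100) / 3 = 1200  -> round up to 2048
  ceph osd pool create testpool 2048 2048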
On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur wrote:
Hi
I have installed a 6-node Ceph cluster and am doing a performance benchmark
for it using Nova VMs. What I have observed is that FIO random write reports
around 250 MBps for 1M block size with 4096 PGs, and 650 MBps for 1M block
size with 2048 PGs. Can somebody let me know if I am missing a
rkload.
You'll do fewer IOs, but bigger IOs, to Ceph, so less CPU.
Hope this is helpful.
Hi All,
What I saw is that after enabling the RBD cache it is working as expected,
meaning sequential write has better MBps than random write. Can somebody
explain this behaviour? Is the RBD cache setting a must for a Ceph cluster to
behave normally?
Thanks
Sumit
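For reference, the client-side settings involved, as they would look in
ceph.conf; the values shown are close to the usual defaults, not taken from
Sumit's cluster:

  [client]
  rbd cache = true
  rbd cache size = 33554432                  # 32 MB
  rbd cache max dirty = 25165824             # 24 MB
  rbd cache writethrough until flush = true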
On Mon, Feb 2, 2015 at 9:59 AM, Sumit Gaur wrote: