Guang Yang wrote:
Thanks Wido, my comments inline...
>Date: Mon, 30 Dec 2013 14:04:35 +0100
>From: Wido den Hollander
>To: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
> after running some time
>On 12/30/2013 12:45 PM, Guang wrote:
Thanks Mark, my comments inline...
Date: Mon, 30 Dec 2013 07:36:56 -0600
From: Mark Nelson
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
after running some time
On 12/30/2013 05:45 AM, Guang wrote:
> Hi ceph-users and ceph-devel,
>
On 12/30/2013 05:45 AM, Guang wrote:
Hi ceph-users and ceph-devel,
Merry Christmas and Happy New Year!
We have a ceph cluster with radosgw, and our customer is using the S3 API to
access the cluster.
The basic information of the cluster is:
bash-4.1$ ceph -s
cluster b9cb3ea9-e1de-48b4-9e86-6921e2c537d2
health HEALTH_ERR 1 pgs inconsistent
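(As a side note on the HEALTH_ERR above: one way to chase an inconsistent PG is
sketched below; the pg id 3.1f is only a placeholder, not taken from this
cluster.)
bash-4.1$ ceph health detail      # lists the inconsistent pg, e.g. "pg 3.1f is active+clean+inconsistent"
bash-4.1$ ceph pg repair 3.1f     # ask the primary OSD to scrub and repair that pg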
>-Original Message-
>From: Dinu Vlad [mailto:dinuvla...@gmail.com]
>Sent: Thursday, November 07, 2013 10:37 AM
>To: ja...@peacon.co.uk; Gruher, Joseph R; ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph cluster performance
>
>I was under the same impression -
I have 2 SSDs (same model, smaller capacity) for / connected on the mainboard.
Their sync write performance is also poor - less than 600 iops, 4k blocks.
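(For reference, a sync-write test along these lines reproduces that kind of
measurement with fio; the mount point, file name and runtime below are
placeholders, not taken from this setup.)
$ fio --name=journal-sync-test --filename=/mnt/ssd/fio-test --size=1G \
      --rw=write --bs=4k --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based --group_reporting
# --sync=1 opens the file O_SYNC, so every 4k write must reach stable media
# before the next one is issued - roughly what the OSD journal does.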
On Nov 7, 2013, at 9:44 PM, Kyle Bader wrote:
> ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
The problem might be SATA transport protocol overhead at the expander.
Have you tried directly connecting the SSDs to SATA2/3 ports on the
mainboard?
--
Kyle
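(One quick sanity check for the expander theory is the negotiated link rate of
each drive; the device path below is a placeholder, and behind an LSI HBA the
"-d sat" passthrough option may be needed.)
$ smartctl -i /dev/sdb          # SATA drives report e.g. "SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)"
$ smartctl -i -d sat /dev/sdb   # same query, forcing SCSI-to-ATA translation through the HBA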
I was under the same impression - using a small portion of the SSD via
partitioning (in my case - 30 gigs out of 240) would have the same effect as
activating the HPA explicitly.
Am I wrong?
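(For comparison, the explicit HPA route would look roughly like this with
hdparm; the device and sector count are placeholders, and shrinking the
visible capacity destroys any data beyond the new limit.)
# hdparm -N /dev/sdc              # show current/native max sectors and whether an HPA is active
# hdparm -N p351646596 /dev/sdc   # example value only: cap a 240 GB drive (468,862,128 sectors) at ~75%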
On Nov 7, 2013, at 8:16 PM, ja...@peacon.co.uk wrote:
On 2013-11-07 17:47, Gruher, Joseph R wrote:
I wonder how effective trim would be on a Ceph journal area.
If the journal empties and is then trimmed, the next write cycle should
be faster, but if the journal is active all the time the benefits
would be lost almost immediately, as those cells are quickly rewritten.
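(Whether the journal device even advertises discard support can be checked
first; a manual discard of an unused journal partition is also possible. The
device names below are placeholders, and blkdiscard wipes the partition.)
$ lsblk -D /dev/sdd      # non-zero DISC-GRAN/DISC-MAX means the device accepts discard/TRIM
# blkdiscard /dev/sdd1   # example only: discard every block of an *unused* journal partition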
I had great results from the older 530 series too.
In this case however, the SSDs were only used for journals and I don't know if
ceph-osd sends TRIM to the drive in the process of journaling over a block
device. They were also under-subscribed, with just 3 x 10G partitions out of
240 GB raw capacity.
On 11/06/2013 03:35 PM, ja...@peacon.co.uk wrote:
On 2013-11-06 20:25, Mike Dawson wrote:
We just fixed a performance issue on our cluster related to spikes
of high latency on some of our SSDs used for osd journals. In our case,
the slow SSDs showed spikes of 100x higher latency than expected.
Many SSDs show this behaviour when 100% provisioned.
No, in our case flashing the firmware to the latest release cured the
problem.
If you build a new cluster with the slow SSDs, I'd be interested in the
results of ioping[0] or fsync-tester[1]. I theorize that you may see
spikes of high latency.
[0] https://code.google.com/p/ioping/
[1] https:
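(A minimal ioping run against a journal SSD would look like this; the target
path and request count are placeholders. Latency spikes show up as outliers in
the per-request lines and in the max/mdev summary.)
$ ioping -c 100 -D /mnt/ssd-journal        # 100 direct-I/O latency probes against a directory on the SSD
$ ioping -c 100 -D -s 4k /mnt/ssd-journal  # same, with an explicit 4k request size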
ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i.
By "fixed" - you mean replaced the SSDs?
Thanks,
Dinu
On Nov 6, 2013, at 10:25 PM, Mike Dawson wrote:
We just fixed a performance issue on our cluster related to spikes of
high latency on some of our SSDs used for osd journals. In our case, the
slow SSDs showed spikes of 100x higher latency than expected.
What SSDs were you using that were so slow?
Cheers,
Mike
On 11/06/2013 11:39 AM, Dinu Vlad wrote:
> I'm using the latest 3.8.0 branch from raring. Is there a more recent/better
> kernel recommended?
I've been using the 3.8 kernel in the precise repo effectively, so I
suspect it should be ok.
I'm using the latest 3.8.0 branch from raring. Is there a more recent/better
kernel recommended?
Meanwhile, I think I might have identified the culprit - my SSD drives are
extremely slow on sync writes, doing 500-600 iops max with 4k blocksize. By
comparison, an Intel 530 in another server (also
Ok, some more thoughts:
1) What kernel are you using?
2) Mixing SATA and SAS on an expander backplane can sometimes have bad
effects. We don't really know how bad this is and in what
circumstances, but the Nexenta folks have seen problems with ZFS on
Solaris and it's not impossible Linux may hit similar issues.
Ok, so after tweaking the deadline scheduler and the filestore_wbthrottle* ceph
settings I was able to get 440 MB/s from 8 rados bench instances, over a single
osd node (pool pg_num = 1800, size = 1)
This still looks awfully slow to me - fio throughput across all disks reaches
2.8 GB/s!!
Any other options or ideas?
Thanks,
Dinu
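(For what it's worth, the tweaks described above would look something like the
sketch below; the values are only illustrative, not the ones used here, and the
filestore_wbthrottle option names should be checked against the ceph version in
use.)
# echo deadline > /sys/block/sdb/queue/scheduler   # per journal/data disk
# ceph.conf, [osd] section:
[osd]
    filestore wbthrottle xfs bytes start flusher = 41943040
    filestore wbthrottle xfs bytes hard limit = 419430400
    filestore wbthrottle xfs ios start flusher = 500
    filestore wbthrottle xfs ios hard limit = 5000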
On Oct 31, 2013, at 6:35 PM, Dinu Vlad wrote:
I tested the osd performance from a single node. For this purpose I deployed a
new cluster (using ceph-deploy, as before) and on fresh/repartitioned drives. I
created a single pool, 1800 pgs. I ran the rados bench both on the osd server
and on a remote one. Cluster configuration stayed "default".
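(For reference, an invocation of that kind of test would look like this; the
pool name, runtime and thread count are placeholders.)
$ rados -p bench1 bench 60 write -t 32           # 60 seconds of 4 MB object writes, 32 concurrent ops
$ rados -p bench1 bench 60 write -t 32 -b 4096   # same, with 4 KB writes to stress small-I/O behaviour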
On 10/30/2013 01:51 PM, Dinu Vlad wrote:
Mark,
The SSDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/enterprise-sata-ssd/?sku=ST240FN0021
and the HDDs are
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/constellation/?sku=ST91000640SS.
The chassis is a "SiliconMechanics C602" - bu
On 10/30/2013 09:05 AM, Dinu Vlad wrote:
Hello,
I've been doing some tests on a newly installed ceph cluster:
# ceph osd pool create bench1 2048 2048
# ceph osd pool create bench2 2048 2048
# rbd -p bench1 create test
# rbd -p bench1 bench-write test --io-pattern rand
elapsed: 483 ops: 396579 ops/sec: 820.23 bytes/sec: 2220781.36
# rad