We've chosen to use the gitbuilder site to make sure we get the same version
when we rebuild nodes, etc.
http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/
So our sources list looks like:
deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 precise main
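For example, adding that line and installing from it might look roughly like this (a sketch; the file path and sudo usage are my assumptions, the ref is the one above):

echo "deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 precise main" | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
sudo apt-get install ceph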
Warren
Hi Sebastien,
Something I didn't see in the thread so far: did you secure erase the SSDs before they got used? I assume these were probably repurposed for this test. We have seen some pretty significant garbage collection issues on various SSDs and other forms of solid state storage, to the point
[...] for your case, even if the SSD still delivers the write IOPS specified by the manufacturer, it won't help in any way. But it seems this practice is increasingly common nowadays.
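For what it's worth, a typical ATA secure-erase sequence looks roughly like the following; the device name is a placeholder, the drive must report "not frozen", and the erase destroys everything on it:

hdparm -I /dev/sdX | grep -i frozen                        # must say "not frozen"
hdparm --user-master u --security-set-pass Eins /dev/sdX   # set a temporary password
hdparm --user-master u --security-erase Eins /dev/sdX      # issue the erase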
Cheers
On 02 Sep 2014, at 18:23, Wang, Warren
<mailto:warren_w...@cable.comcast.com> wrote:
Hi Sebastien,
We've contemplated doing something like that, but we also realized that it would result in manual work in Ceph every time we lose a drive or server, and a pretty bad experience for the customer when we have to do maintenance.
We also kicked around the idea of leveraging the notion of a Hadoop rack
On 5/21/15, 5:04 AM, "Blair Bethwaite" wrote:
>Hi Warren,
>
>On 20 May 2015 at 23:23, Wang, Warren
>wrote:
>> We've contemplated doing something like that, but we also realized that
>> it would result in manual work in Ceph every time we lose a drive or
>>
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, who advocates turning these values down. I'm okay with extending recovery time, especially when we are talking about a default of 3x replication, with the trade-off of better client responsiveness.
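For reference, "turning these values down" usually means the standard recovery/backfill tunables, applied at runtime along these lines (the values are just the commonly used minimums, not anything from Mark's tests):

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'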
You'll take a noticeable hit on write latency. Whether or not it's tolerable will be up to you and the workload you have to capture. Large-file operations are throughput-efficient without an SSD journal, as long as you have enough spindles.
About the Intel P3700, you will only need 1 to keep up
Injecting args into the running procs is not meant to be persistent. You'll
need to modify /etc/ceph/ceph.conf for that.
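For example, whatever was injected at runtime needs a matching entry in ceph.conf to survive a daemon restart; a sketch using the recovery settings discussed above (values illustrative):

[osd]
osd max backfills = 1
osd recovery max active = 1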
Warren
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve
Dainard
Sent: Thursday, August 06, 2015 9:16 PM
To: ceph-user
Are you running fio against a sparse file, prepopulated file, or a raw device?
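It matters because a sparse file can make the results look better than the storage really is. One hedge is to prefill the file before the measured run, roughly like this (file name, size, and job options are illustrative):

fio --name=prefill --filename=/mnt/test/fio.dat --size=10G --bs=1M --rw=write --direct=1
fio --name=randwrite --filename=/mnt/test/fio.dat --size=10G --bs=4k --rw=randwrite --iodepth=32 --ioengine=libaio --direct=1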
Warren
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
scott_tan...@yahoo.com
Sent: Thursday, August 20, 2015 3:48 AM
To: ceph-users
Cc: liuxy666
Subject: [ceph-users] PCIE-SSD OSD bottom pe
Hey Kenneth, it looks like you're just down the tollroad from me. I'm in Reston Town Center.
Just as a really rough estimate, I'd say this is your max IOPS:
80 IOPS/spinner * 6 drives / 3 replicas = 160ish max sustained IOPS
It's more complicated than that, since you have a reasonable solid state
When we know we need to take a node out, we weight it down over time. Depending on your cluster, you may need to do this over days or hours.
In theory, you could do the same when putting OSDs in, by setting noin, and then setting the weight to something very low, and going up over time. I haven't tried this
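A rough sketch of both directions with stock commands (the OSD ids, weights, and step intervals are made up for illustration):

# Draining a node: step each OSD's CRUSH weight down over hours/days
ceph osd crush reweight osd.12 0.8    # then 0.6, 0.4, ... before removal
# Adding: keep new OSDs from being marked in automatically, then ramp up
ceph osd set noin
ceph osd crush reweight osd.42 0.1    # step upward toward full weight as recovery settles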
I added sharding to our busiest RGW sites, but it will not shard existing bucket indexes; it only applies to new buckets. Even with that change, I'm still considering moving the index pool to SSD. The main factor being the rate of writes. We are looking at a project that will have extremely high write rates
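For reference, the shard count for new buckets is a single rgw option, and the index pool can be pointed at an SSD-backed CRUSH ruleset; the section name, shard count, and ruleset id below are illustrative:

[client.radosgw.gateway]
rgw override bucket index max shards = 16

ceph osd pool set .rgw.buckets.index crush_ruleset 4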
Be selective with the SSDs you choose. I personally have tried Micron M500DC,
Intel S3500, and some PCIE cards that would all suffice. There are MANY that do
not work well at all. A shockingly large list, in fact.
Intel S3500/S3700 are the gold standards.
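The usual way to separate the good from the bad for journal duty is a sustained O_DSYNC write test, since that's how the journal writes; roughly (the device name is a placeholder, and the test destroys its contents):

fio --name=journal-test --filename=/dev/sdX --bs=4k --rw=write --sync=1 --direct=1 --iodepth=1 --runtime=60 --time_based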
Warren
From: ceph-users [mailto:ceph-use
Hi folks,
I know it's short notice, but we have recently formed a Ceph users meetup group
in the DC area. We have our first meetup on 12/18. We should have more notice
before the next one, so please join the meetup group, even if you can't make
this one!
http://www.meetup.com/Ceph-DC/events/
I'm about to change it on a big cluster too. It totals around 30 million, so I'm a bit nervous about changing it. As far as I understood, it would indeed move them around, if you can get underneath the threshold, but that may be hard to do.
Two more settings that I highly recommend changing on a big cluster
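The snippet cuts off before naming the settings; assuming the threshold in question is filestore directory splitting (my assumption, not stated in the thread), the knobs would be something like:

[osd]
# assumption: the thread concerns filestore split/merge behavior
filestore merge threshold = 40
filestore split multiple = 8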
In the minority on this one. We have a number of the big SM 72-drive units with 40 GbE. Definitely not as fast as even the 36-drive units, but it isn't awful for our average mixed workload. We can exceed all available performance with some workloads, though.
So while we can't extract all the performance
Sadly, this is one of those things that people find out after running their
first production Ceph cluster. Never run with the defaults. I know it's been
recently reduced to 3 and 1 or 1 and 3, I forget, but I would advocate 1 and 1.
Even that will cause a tremendous amount of traffic with any recovery.