Re: [ceph-users] ceph-deploy with --release (--stable) for dumpling?

2014-09-02 Thread Wang, Warren
We've chosen to use the gitbuilder site to make sure we get the same version when we rebuild nodes, etc. http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ So our sources list looks like: deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 precise main Warren -
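
As a sketch, pinning that build might look like the following on a Precise node; the file name /etc/apt/sources.list.d/ceph.list and the package selection are assumptions, not something stated in the thread:

  # /etc/apt/sources.list.d/ceph.list (illustrative file name)
  deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.80.5 precise main

  # refresh and install from the pinned ref
  apt-get update
  apt-get install ceph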

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-02 Thread Wang, Warren
Hi Sebastien, Something I didn't see in the thread so far: did you secure erase the SSDs before they got used? I assume these were probably repurposed for this test. We have seen some pretty significant garbage collection issues on various SSDs and other forms of solid state storage, to the point
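
For context, a secure erase ahead of benchmarking is usually done with something like the commands below; a sketch that assumes a SATA SSD at /dev/sdX which is not the boot device and is not security-frozen, and both approaches destroy all data on the drive:

  # full-device TRIM
  blkdiscard /dev/sdX

  # or an ATA secure erase: set a temporary password, then erase
  hdparm --user-master u --security-set-pass p /dev/sdX
  hdparm --user-master u --security-erase p /dev/sdX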

Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3, 2K IOPS

2014-09-05 Thread Wang, Warren
imes, for your case, if the SSD still delivers the write IOPS specified by the manufacturer, it won't help in any way. But it seems this practice is increasingly common nowadays. Cheers On 02 Sep 2014, at 18:23, Wang, Warren <mailto:warren_w...@cable.comcast.com> wrote: Hi Sebastien,

Re: [ceph-users] HDFS on Ceph (RBD)

2015-05-20 Thread Wang, Warren
We've contemplated doing something like that, but we also realized that it would result in manual work in Ceph every time we lose a drive or server, and a pretty bad experience for the customer when we have to do maintenance. We also kicked around the idea of leveraging the notion of a Hadoop rack

Re: [ceph-users] HDFS on Ceph (RBD)

2015-05-21 Thread Wang, Warren
On 5/21/15, 5:04 AM, "Blair Bethwaite" wrote: >Hi Warren, > >On 20 May 2015 at 23:23, Wang, Warren wrote: >> We've contemplated doing something like that, but we also realized that >> it would result in manual work in Ceph every time we lose a drive or >>

Re: [ceph-users] Discuss: New default recovery config settings

2015-06-01 Thread Wang, Warren
Hi Mark, I don't suppose you logged latency during those tests, did you? I'm one of the folks, as Bryan mentioned, who advocates turning these values down. I'm okay with extending recovery time, especially when we are talking about a default of 3x replication, with the trade-off of better client r
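
The values under discussion are the recovery and backfill throttles; a minimal sketch of the conservative settings advocated here, assuming they live in the [osd] section of ceph.conf (the option names are the standard ones, the values are the point being argued):

  [osd]
  # limit concurrent backfill and recovery work per OSD in favor of client I/O
  osd max backfills = 1
  osd recovery max active = 1
  # deprioritize recovery ops relative to client ops
  osd recovery op priority = 1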

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-09 Thread Wang, Warren
You'll take a noticeable hit on write latency. Whether or not it's tolerable will be up to you and the workload you have to capture. Large file operations are throughput-efficient without an SSD journal, as long as you have enough spindles. As for the Intel P3700, you will only need 1 to keep up
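
To illustrate the layout implied here, a single NVMe journal device can front several spinners; a sketch using the ceph-disk tooling of that era, with all device names assumed:

  # one P3700 (/dev/nvme0n1) carrying the journals for several spinning OSDs
  ceph-disk prepare /dev/sdb /dev/nvme0n1
  ceph-disk prepare /dev/sdc /dev/nvme0n1
  ceph-disk prepare /dev/sdd /dev/nvme0n1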

Re: [ceph-users] ceph tell not persistent through reboots?

2015-08-06 Thread Wang, Warren
Injecting args into the running procs is not meant to be persistent. You'll need to modify /etc/ceph/ceph.conf for that. Warren -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve Dainard Sent: Thursday, August 06, 2015 9:16 PM To: ceph-user
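
A sketch of the two halves of that advice, using one illustrative option: injectargs only changes the running daemons, while ceph.conf is what survives a restart:

  # runtime only, lost when the daemon restarts
  ceph tell osd.* injectargs '--osd_max_backfills 1'

  # persistent: add to /etc/ceph/ceph.conf on the OSD hosts
  [osd]
  osd max backfills = 1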

Re: [ceph-users] PCIE-SSD OSD bottom performance issue

2015-08-22 Thread Wang, Warren
Are you running fio against a sparse file, prepopulated file, or a raw device? Warren From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of scott_tan...@yahoo.com Sent: Thursday, August 20, 2015 3:48 AM To: ceph-users Cc: liuxy666 Subject: [ceph-users] PCIE-SSD OSD bottom pe
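
For comparison, a sketch of a 4k random-write fio run against a raw device; the device path, queue depth, and job count are assumptions, and a prepopulated file could be substituted for --filename:

  # 4k random writes with direct I/O against the raw PCIe SSD (destroys data)
  fio --name=randwrite --filename=/dev/nvme0n1 --direct=1 \
      --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
      --numjobs=4 --runtime=60 --time_based --group_reporting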

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Wang, Warren
Hey Kenneth, it looks like you're just down the toll road from me. I'm in Reston Town Center. Just as a really rough estimate, I'd say this is your max IOPS: 80 IOPS/spinner * 6 drives / 3 replicas = 160ish max sustained IOPS. It's more complicated than that, since you have a reasonable solid state

Re: [ceph-users] Storage node refurbishing, a "freeze" OSD feature would be nice

2015-08-31 Thread Wang, Warren
When we know we need to take a node out, we weight it down over time. Depending on your cluster, you may need to do this over days or hours. In theory, you could do the same when putting OSDs in, by setting noin, and then setting the weight to something very low and going up over time. I haven't tried thi
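
A sketch of the gradual reweight described here; the OSD id, step sizes, and wait intervals are all assumptions:

  # drain an OSD gradually ahead of maintenance, letting recovery settle between steps
  ceph osd crush reweight osd.12 0.75
  ceph osd crush reweight osd.12 0.5
  ceph osd crush reweight osd.12 0.25
  ceph osd crush reweight osd.12 0

  # for new OSDs: keep them from being marked in automatically,
  # then ramp the crush weight up the same way
  ceph osd set noin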

Re: [ceph-users] Moving/Sharding RGW Bucket Index

2015-09-01 Thread Wang, Warren
I added sharding to our busiest RGW sites, but it will not shard existing bucket indexes; it only applies to new buckets. Even with that change, I'm still considering moving the index pool to SSD. The main factor is the rate of writes. We are looking at a project that will have extremely high wr
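
For the sharding piece, a sketch of the option involved, with the section name and shard count as assumptions; as noted above, it only takes effect for buckets created after the change:

  [client.radosgw.gateway]
  # shard new bucket indexes across 8 RADOS objects (example value)
  rgw override bucket index max shards = 8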

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Wang, Warren
Be selective with the SSDs you choose. I personally have tried the Micron M500DC, Intel S3500, and some PCIe cards that would all suffice. There are MANY that do not work well at all. A shockingly large list, in fact. The Intel S3500/S3700 are the gold standards. Warren From: ceph-users [mailto:ceph-use
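
A common way to separate journal-worthy SSDs from the rest is a small synchronous-write test; a sketch, with the device path assumed and the usual caveat that it overwrites data on the target:

  # sustained 4k O_DSYNC writes; good journal SSDs hold up, poor ones collapse
  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync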

[ceph-users] Washington DC area: Ceph users meetup, 12/18

2013-12-09 Thread Wang, Warren
Hi folks, I know it's short notice, but we have recently formed a Ceph users meetup group in the DC area. We have our first meetup on 12/18. We should have more notice before the next one, so please join the meetup group, even if you can't make this one! http://www.meetup.com/Ceph-DC/events/

Re: [ceph-users] Ceph performance, empty vs part full

2015-09-03 Thread Wang, Warren
I'm about to change it on a big cluster too. It totals around 30 million, so I'm a bit nervous about changing it. As far as I understood, it would indeed move them around if you can get underneath the threshold, but that may be hard to do. Two more settings that I highly recommend changing on a big
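
The threshold being referred to is presumably the filestore directory split/merge behavior; a sketch of the knobs involved, with the values purely illustrative (filestore splits a PG directory at roughly split_multiple * merge_threshold * 16 objects):

  [osd]
  filestore merge threshold = 40
  filestore split multiple = 8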

Re: [ceph-users] high density machines

2015-09-03 Thread Wang, Warren
I'm in the minority on this one. We have a number of the big SM 72-drive units w/ 40 GbE. Definitely not as fast as even the 36-drive units, but it isn't awful for our average mixed workload. We can exceed all available performance with some workloads, though. So while we can't extract all the perfo

Re: [ceph-users] Impact add PG

2015-09-04 Thread Wang, Warren
Sadly, this is one of those things that people find out after running their first production Ceph cluster. Never run with the defaults. I know it's been recently reduced to 3 and 1 or 1 and 3, I forget, but I would advocate 1 and 1. Even that will cause a tremendous amount of traffic with any re
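
A sketch of applying the 1-and-1 throttles at runtime and then growing placement groups in small steps; the pool name and numbers are assumptions:

  # throttle recovery/backfill on all OSDs before the change
  ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

  # bump pg_num/pgp_num in modest increments rather than one large jump
  ceph osd pool set volumes pg_num 1024
  ceph osd pool set volumes pgp_num 1024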