On Thu, Oct 01, 2015 at 10:01:03PM -0400, J David wrote:
> So, do medium-sized IT organizations (i.e. those without the resources
> to have a Ceph developer on staff) run Hammer-based deployments in
> production successfully?

I'm not sure if I count, given that I'm now working at DreamHost as the
i[...]

>> [...]read/write the performance numbers would
>> be ~6/7 X slower.
>> So, it is very much dependent on your workload/application access
>> pattern and obviously the cost you are willing to spend.
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
>> Of Mark Nelson
>> Sent: Wednesday, September 30, 2015 12:04 PM
>> To: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Ceph, SSD, and NVMe
>>
>> On 09/30/2015 09:34 AM, J David wrote:
>>> Because we have a good thing going, our Ceph clusters are still
>>> running Firefly on all of our clusters including our largest, all-SSD
>>> cluster.
>>> If I understand right, newer versions of Ceph make much better use of
>>> SSDs and give overall much higher performance on the same equipment.

> [...]1-2809.html
>
> Regards,
> James
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of J David
> Sent: Wednesday, September 30, 2015 7:35 AM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Ceph, SSD, and NVMe
>
> Because we have a good thing going, our Ceph clusters are still running
> Firefly on all of our clusters including our largest, all-SSD cluster.
> If I understand right, newer versions of Ceph make much better use of
> SSDs and give overall much higher performance on the same equipment.
> However, the imp[...]
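
For what it's worth, since the answers here keep coming back to how
workload-dependent these numbers are: below is a minimal sketch of the kind
of probe you could run against a throwaway pool before and after an upgrade,
using the python-rados bindings. The pool name, object size, and iteration
count are illustrative assumptions on my part, not anything from this thread.

#!/usr/bin/env python
# Minimal latency probe using the python-rados bindings: time a batch of
# small synchronous writes and reads against a throwaway pool so the same
# test can be re-run on the same hardware before and after an upgrade.
# Pool name, object size, and count below are assumptions for illustration.
import time
import rados

POOL = "bench"      # assumed throwaway pool; do not point this at live data
OBJ_SIZE = 4096     # 4 KiB payload, roughly a "small random IO" shape
COUNT = 1000        # number of write+read pairs to time

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        payload = b"x" * OBJ_SIZE

        start = time.time()
        for i in range(COUNT):
            ioctx.write_full("bench-obj-%d" % i, payload)
        write_s = time.time() - start

        start = time.time()
        for i in range(COUNT):
            ioctx.read("bench-obj-%d" % i, OBJ_SIZE, 0)
        read_s = time.time() - start

        # Clean up the test objects so repeated runs start from scratch.
        for i in range(COUNT):
            ioctx.remove_object("bench-obj-%d" % i)

        print("avg write: %.2f ms, avg read: %.2f ms"
              % (1000.0 * write_s / COUNT, 1000.0 * read_s / COUNT))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

This only exercises small, single-threaded, synchronous IO, so it is a sanity
check for that one access pattern rather than a substitute for benchmarking
your actual application workload.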