It sounds like you're just assuming these drives don't perform well...

----- Original Message -----
From: "Mark Nelson"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 2:17:19 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs

Take Greg's comments to heart, because he
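
One way to move past assumptions is to measure the drives directly. A
minimal sketch of the usual journal-suitability test: small synchronous
writes at queue depth 1, which is the pattern a Ceph journal generates.
/dev/sdX is a placeholder, and the run will overwrite that device:

  # 4k synchronous writes at iodepth=1; watch IOPS and completion latency.
  fio --name=journal-test --filename=/dev/sdX \
      --ioengine=libaio --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based

Drives that look fine on buffered sequential benchmarks often collapse on
this pattern, which is usually where these disagreements come from.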
----- Original Message -----
From: "Gregory Farnum"
To: "Tyler Bishop"
Cc: "David", "Ceph Users"
Sent: Monday, January 18, 2016 2:01:44 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs

On Sun, Jan 17, 2016 at 12:34 PM, Tyler Bishop wrote:
> The changes you are looking for are coming from Sandisk in the ceph "Jewel"
> release coming up.
>
> Based on benchmarks and testing, sandisk has really
>
> The biggest changes will come from the memory allocation with writes.
> Latency is going to be a lot lower.
>
> ----- Original Message -----
> From: "David"
> To: "Wido den Hollander"
> Cc: ceph-users@lists.ceph.com
> Sent: Sunday, January 17, 2016 6:49:25 AM
> Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs

-Greg
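
For anyone who can't wait for Jewel: the allocator's effect on write
latency can be probed today. A minimal sketch, assuming an RPM-based host
where the packaged /etc/sysconfig/ceph is read by the OSD units (the path,
value, and unit name may differ on your distro):

  # Grow tcmalloc's thread cache (128 MB here) and restart the OSDs; this
  # was the common stopgap for allocator-related OSD write latency.
  echo 'TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728' >> /etc/sysconfig/ceph
  systemctl restart ceph-osd.target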
On 01/16/2016 12:06 PM, David wrote:
> Hi!
> We’re planning our third ceph cluster and have been trying to find how to
> maximize IOPS on this one.
> Our needs:
> * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
> servers)
> * Pool for storage of many small files, rbd (probably dovecot maildir and
> dovecot index etc)

Check these out too:
http://www.seagate.com/internal-hard-drives/solid-state-hybrid/1200-ssd/

----- Original Message -----
From: "Christian Balzer"
To: "ceph-users"
Sent: Sunday, January 17, 2016 10:45:56 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs

Hello,
On Sat, 16 Jan 2016 19:06:07 +0100 David wrote:
> Hi!
>
> We’re planning our third ceph cluster and have been trying to find how to
> maximize IOPS on this one.
>
> Our needs:
> * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
> servers)
> * Pool for storage of many small files, rbd (probably dovecot maildir and
> dovecot index etc)
----- Original Message -----
From: "David"
To: "Wido den Hollander"
Cc: ceph-users@lists.ceph.com
Sent: Sunday, January 17, 2016 6:49:25 AM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs

Thanks Wido, those are good pointers indeed :)
So we just have to make sure the backend storage (SSD/NVMe journals) won’t be
saturated (or the controllers) and then go with as many RBD per VM as possible.

Kind Regards,
David Majchrzak

On 16 Jan 2016, at 22:26, Wido den Hollander wrote:

> On 01/16/
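
In practice "as many RBD per VM as possible" means attaching several
images to the guest and striping across them, so that no single librbd
queue becomes the bottleneck. A minimal sketch (pool and image names are
invented for illustration; rbd sizes below are in MB):

  # Create four images and attach each to the VM as a virtio disk.
  for i in 1 2 3 4; do rbd create mysql/db01-disk$i --size 51200; done

  # Inside the guest: stripe one logical volume across the four disks.
  pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
  vgcreate vgdata /dev/vdb /dev/vdc /dev/vdd /dev/vde
  lvcreate --stripes 4 --stripesize 64k --extents 100%FREE --name mysql vgdata
  mkfs.xfs /dev/vgdata/mysql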
On 01/16/2016 07:06 PM, David wrote:
> Hi!
>
> We’re planning our third ceph cluster and have been trying to find how to
> maximize IOPS on this one.
>
> Our needs:
> * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
> servers)
> * Pool for storage of many small files, rbd (probably dovecot maildir and
> dovecot index etc)
Hi!
We’re planning our third ceph cluster and have been trying to find how to maximize
IOPS on this one.
Our needs:
* Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM servers)
* Pool for storage of many small files, rbd (probably dovecot maildir and
dovecot index etc)
So I’ve bee
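
A concrete starting point for the two-pool layout being discussed might
look like this (pool names, PG counts, and the SSD ruleset number are
assumptions to adapt, not recommendations):

  # One pool per workload so placement and tuning stay independent.
  ceph osd pool create mysql 128 128
  ceph osd pool create mail 128 128

  # Pin both to an SSD-only CRUSH rule (ruleset 1 is assumed to exist).
  ceph osd pool set mysql crush_ruleset 1
  ceph osd pool set mail crush_ruleset 1

  # Example image for one MySQL guest (size in MB, i.e. 100 GB).
  rbd create mysql/db01 --size 102400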