From: Martin Millnert
Sent: 29 March 2015 19:58
To: Mark Nelson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] running Qemu / Hypervisor AND Ceph on the same nodes
On Thu, Mar 26, 2015 at 12:36:53PM -0500, Mark Nelson wrote:
> Having said that, small nodes are
> absolutely more expensive per OSD as far as raw hardware and
> power/cooling goes.
The smaller the volume manufacturers move on the units, the worse the margin
typically is (from the buyer's side). Also, CPUs t
We run many clusters in a similar config with shared Hypervisor/OSD/RGW/RBD
in production and in staging, but we have been looking into moving our
storage to its own cluster so that we can scale independently. We used AWS
and scaled up a ton of virtual users using JMeter clustering to test
performa
On 03/26/2015 12:13 PM, Quentin Hartman wrote:
> That one big server sounds great, but it also sounds like a single point
> of failure.

Absolutely, but I'm talking about folks who want dozens of these, not one.

> It's also not cheap. I've been able to build this cluster
> for about $1400 per node,
That one big server sounds great, but it also sounds like a single point of
failure. It's also not cheap. I've been able to build this cluster for
about $1400 per node, including the 10Gb networking gear, which is less
than what I see the _empty case_ you describe going for new. Even used, the
lowe
I suspect a config like this where you only have 3 OSDs per node would
be more manageable than something denser.
I.e., theoretically a single E5-2697v3 is enough to run 36 OSDs in a 4U
Supermicro chassis for a semi-dense converged solution. You could
attempt to restrict the OSDs to one socket a
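For illustration, restricting the OSDs to one socket could look roughly like
the sketch below; the NUMA node, core range and OSD ID are assumptions for
the example, not values from this thread.

    # Sketch: start OSD 0 bound to NUMA node 0 (first socket), including its
    # memory allocations, leaving the other socket to the VMs.
    numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 --cluster ceph

    # Or confine an already-running OSD to cores 0-13 (assumed here to be
    # socket 0 on this particular box):
    taskset -pc 0-13 $(pidof -s ceph-osd)

Whether pinning actually pays off depends on where the NICs and HBAs sit
NUMA-wise, so treat it as a starting point rather than a rule.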
I run a converged openstack / ceph cluster with 14 1U nodes. Each has 1 SSD
(os / journals), 3 1TB spinners (1 OSD each), 16 HT cores, 10Gb NICs for
ceph network, and 72GB of RAM. I configure openstack to leave 3GB of RAM
unused on each node for OSD / OS overhead. All the VMs are backed by ceph
vol
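For anyone hunting for the knob that keeps that RAM out of the scheduler's
hands, Nova's reserved_host_memory_mb is the usual way to do it; a minimal
sketch follows (the 3072 mirrors the 3GB mentioned above, and the config
section may differ between releases):

    # /etc/nova/nova.conf on each compute node (sketch)
    [DEFAULT]
    # keep ~3GB per node free for ceph-osd and OS overhead
    reserved_host_memory_mb = 3072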
It's kind of a philosophical question. Technically there's nothing that
prevents you from putting ceph and the hypervisor on the same boxes.
It's a question of whether or not potential cost savings are worth
increased risk of failure and contention. You can minimize those things
through vario
A word of caution: While normally my OSDs use very little CPU, I have
occasionally had an issue where the OSDs saturate the CPU (not necessarily
during a rebuild). This might be a kernel thing, or a driver thing specific
to our hosts, but were this to happen to you, it now impacts your VMs as
well
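One way to bound the damage from a runaway OSD on a converged node is a CPU
quota via cgroups; a rough sketch with the libcgroup tools follows (the group
name and the four-core cap are arbitrary assumptions for the example):

    # Sketch: cap all ceph-osd processes at roughly 4 cores in total
    cgcreate -g cpu:/ceph-osd
    cgset -r cpu.cfs_period_us=100000 ceph-osd
    cgset -r cpu.cfs_quota_us=400000 ceph-osd    # 400000/100000 = 4 cores
    for pid in $(pidof ceph-osd); do cgclassify -g cpu:/ceph-osd $pid; done

Assuming libvirt already confines the guests in their own cgroups, this at
least stops a misbehaving OSD from starving the VMs of CPU outright.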
Hi,
in the past I read pretty often that it's not a good idea to run Ceph
and QEMU / the hypervisors on the same nodes.
But why is this a bad idea? You save space and can better use the
resources you have in the nodes anyway.
Stefan