> On 10.01.2020 at 07:10, Mainor Daly wrote:
>
> Hi Stefan,
>
> before I give some suggestions, can you first describe the use case for
> which you want to use that setup? Also, which aspects are important to you?

It's just the backup target of another Ceph cluster, to sync snapshots once [...]
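For that use case, a minimal sketch of how such a sync is often driven, streaming incremental diffs with rbd export-diff / import-diff; the config paths, pool/image and snapshot names are assumptions for illustration, not from the thread:

"""Sketch: incremental RBD snapshot sync from a source to a backup cluster.

Assumes the previous snapshot already exists on both sides; all names
and paths below are made up for illustration.
"""
import subprocess

SRC_CONF = "/etc/ceph/source.conf"   # assumed config for the source cluster
DST_CONF = "/etc/ceph/backup.conf"   # assumed config for the backup cluster
IMAGE = "rbd/vm-disk-1"              # assumed image
PREV_SNAP = "backup-2020-01-09"      # last snapshot already synced
NEW_SNAP = "backup-2020-01-10"       # snapshot to ship now

# rbd export-diff ... - | rbd import-diff - ...  (piped between clusters)
export = subprocess.Popen(
    ["rbd", "-c", SRC_CONF, "export-diff",
     "--from-snap", PREV_SNAP, f"{IMAGE}@{NEW_SNAP}", "-"],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["rbd", "-c", DST_CONF, "import-diff", "-", IMAGE],
    stdin=export.stdout, check=True,
)
export.stdout.close()
if export.wait() != 0:
    raise RuntimeError("rbd export-diff failed")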
> Stefan Priebe - Profihost AG <s.pri...@profihost.ag> wrote on 9 January 2020 at 22:[...]

It sounds like an I/O bottleneck (either max IOPS or max throughput) in
the making. If you are looking for cold-storage archival data only, then
it may be OK (if it doesn't matter how long it takes to write the data).
If this is production data with any sort of IOPS load or data change
rate, [...]
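For scale, a back-of-envelope check of the targets stated elsewhere in the thread (40,000 IOPS, 2,500 MByte/s) against the proposed spinning-disk layout; a sketch only, and the per-disk figures are assumptions, not from the thread:

"""Back-of-envelope check of the backup-target requirements against HDDs.

Assumed figures (NOT from the thread): ~150 sustained random IOPS and
~150 MB/s sequential throughput per 7.2k NL-SAS disk.
"""
DISKS = 6 * 12      # largest proposed layout: 6 nodes x 12 disks
HDD_IOPS = 150      # assumed per-disk random write IOPS
HDD_MBPS = 150      # assumed per-disk sequential MB/s
REPLICAS = 3        # 3x replication triples every client write

client_iops = DISKS * HDD_IOPS / REPLICAS
client_mbps = DISKS * HDD_MBPS / REPLICAS

print(f"~{client_iops:,.0f} client write IOPS vs. 40,000 requested")
print(f"~{client_mbps:,.0f} MB/s client writes vs. 2,500 requested")

Roughly 3,600 on both counts: large sequential backup streams may get close to the throughput target, but random IOPS fall an order of magnitude short, which is presumably the bottleneck being warned about here.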
As a starting point, the current idea is to use something like:

4-6 nodes with 12x 12 TB disks each
AMD EPYC 7302P 3 GHz, 16C/32T
128 GB RAM

Something to discuss:
- EC, or go with 3 replicas? We'll use BlueStore with compression.
- Do we need something like Intel Optane for WAL / DB, or [...]
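To put numbers on the EC-versus-3-replica question, a rough usable-capacity sketch from the node counts and disk sizes above; the EC profile k=4, m=2 is an assumption for illustration, and compression gains are ignored:

"""Usable capacity of the proposed layouts under 3x replication vs. EC.

Node/disk counts are from the proposal above; the EC profile (k=4, m=2)
is an assumed example.
"""
for nodes in (4, 6):
    raw_tb = nodes * 12 * 12              # 12 disks x 12 TB per node
    usable_replica = raw_tb / 3           # 3x replication
    usable_ec = raw_tb * 4 / (4 + 2)      # EC k=4, m=2
    print(f"{nodes} nodes: {raw_tb} TB raw -> "
          f"{usable_replica:.0f} TB at 3x, {usable_ec:.0f} TB at EC 4+2")
# 4 nodes: 576 TB raw -> 192 TB at 3x, 384 TB at EC 4+2
# 6 nodes: 864 TB raw -> 288 TB at 3x, 576 TB at EC 4+2

Only the 6-node variant with EC clears the 500 TByte requirement, and that is before leaving headroom for recovery. On the Optane question: BlueStore can place the WAL/DB on a faster device (e.g. ceph-volume lvm create --data /dev/sdX --block.db <nvme-lv>), but sizing that device is its own discussion.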
I would try to scale horizontally with smaller Ceph nodes, so you have
the advantage of being able to choose an EC profile that does not
require too much overhead, and you can use failure domain "host".

Joachim

> On 09.01.2020 at 15:31, Wido den Hollander wrote:
>> On 1/9/20 2:27 PM, Stefan Priebe [...]
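A sketch of what that could look like on the CLI (driven from Python here); the profile name, k/m values, PG count and pool name are all assumptions, not from the thread:

"""Sketch: EC pool with failure domain "host" plus BlueStore compression.

All names and numbers (backup-ec, k=4/m=2, 1024 PGs, pool "backup") are
assumed examples.
"""
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# EC profile: 4 data + 2 coding chunks, one chunk per host.
ceph("osd", "erasure-code-profile", "set", "backup-ec",
     "k=4", "m=2", "crush-failure-domain=host")

# Erasure-coded pool using that profile.
ceph("osd", "pool", "create", "backup", "1024", "1024",
     "erasure", "backup-ec")

# BlueStore compression on the pool, as planned earlier in the thread.
ceph("osd", "pool", "set", "backup", "compression_algorithm", "snappy")
ceph("osd", "pool", "set", "backup", "compression_mode", "aggressive")

Note that k=4, m=2 with failure domain "host" needs at least six hosts, so the 4-node variant could not use this profile; that is exactly the argument for more, smaller nodes.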
> I'm currently trying to work out a concept for a Ceph cluster which can
> be used as a target for backups and which satisfies the following
> requirements:
>
> - approx. write speed of 40,000 IOPS and 2,500 MByte/s

You might need to have a large (at least non-1) number of writers to get
to that speed [...]
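That effect can be measured directly with rados bench; a sketch that repeats a short write benchmark with an increasing number of concurrent writers (pool name and thread counts are assumptions):

"""Sketch: how aggregate write throughput scales with concurrent writers."""
import subprocess

POOL = "backup"   # assumed pool name

for threads in (1, 4, 16, 64):
    print(f"--- rados bench, {threads} concurrent writers ---")
    # 30-second write benchmark with `threads` in-flight operations.
    subprocess.run(
        ["rados", "bench", "30", "write",
         "-p", POOL, "-t", str(threads), "--no-cleanup"],
        check=True,
    )
# Remove the benchmark objects afterwards with: rados -p backup cleanup

A single writer is bounded by per-operation latency; figures like 2,500 MByte/s are normally only reached in aggregate across many parallel writers.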
Hello,

I'm currently trying to work out a concept for a Ceph cluster which can
be used as a target for backups and which satisfies the following
requirements:

- approx. write speed of 40,000 IOPS and 2,500 MByte/s
- 500 TByte total available space

Does anyone have experience with a Ceph cluster of [...]