Hello John,
you don't need such a big CPU; save yourself some money with a 12c/24t part and
invest it in better / more disks. The same goes for memory: 128G would be
enough. And why install 4x 25G NICs? Hard disks won't be able to use that
capacity.
In addition, you can use the 2 disks for OSDs and not
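On the memory point: with BlueStore, per-OSD cache is governed by
osd_memory_target (4 GiB by default), so a rough ceph.conf sketch of how 128G
covers a node could look like this (the OSD count and target value are
illustrative, not from the thread):

    # ceph.conf sketch, assuming ~12 HDD OSDs per node (hypothetical count)
    [osd]
    osd_memory_target = 6442450944   # 6 GiB per OSD -> ~72 GiB of OSD caches
    # the rest of the 128 GiB is left for the OS, page cache, and other daemons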
If this is Skylake, the 6-channel memory architecture lends itself better to
configs such as 192GB (6 x 32GB), so yes, even though 128GB is most likely
sufficient, using 6 x 16GB (96GB) might be too small.
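For reference, the balanced-channel options work out as follows (a simple
worked comparison, not from the original thread):

    6 channels x 16 GB =  96 GB   (balanced, but under the 128 GB target)
    6 channels x 32 GB = 192 GB   (balanced, the next step up)
    4 channels x 32 GB = 128 GB   (hits 128 GB, but leaves two channels empty)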
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin Verges
Sent: Saturday
Hi Martin,
Hardware has already been acquired and was spec'd to mostly match our
current clusters, which perform very well for us. I'm really just hoping to
hear from anyone who may have experience moving from filestore => bluestore
with an HDD cluster. Obviously we'll be doing testing, but it's always helpful
to hear real-world experience first.
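For anyone following the thread, the per-OSD conversion procedure from the
BlueStore migration docs goes roughly like this (a sketch only; the OSD id and
device path are placeholders, and the ceph-volume syntax assumes Luminous or
later):

    # replace one filestore OSD with bluestore, one OSD at a time
    ID=5                    # placeholder OSD id
    DEV=/dev/sdX            # placeholder data device
    ceph osd out $ID                          # drain it, wait for HEALTH_OK
    systemctl stop ceph-osd@$ID
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap $DEV --destroy
    ceph-volume lvm create --bluestore --data $DEV --osd-id $ID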
> On 2. Feb 2019, at 01:25, Carlos Mogas da Silva wrote:
>
>> On 01/02/2019 22:40, Alan Johnson wrote:
>> Confirm that no pools are created by default with Mimic.
>
> I can confirm that. Mimic deploy doesn't create any pools.
https://ceph.com/community/new-luminous-pool-tags/
Yes and that’s
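Since Mimic no longer creates a default rbd pool, pools have to be created and
tagged by hand; a minimal sketch (pool name and PG count are just examples):

    ceph osd pool create rbd 128               # size the PG count to your cluster
    ceph osd pool application enable rbd rbd   # tag it, per the pool-tags post above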
Hi all,
I'm looking at expanding the storage of my cluster with some external
HDDs and would like advice on the connection interface.
I have 3 storage nodes that are combined Ceph
monitor+manager+metadata+OSD with 1TB hard drives (HGST
HTS541010A9E680). The nodes themselves are built on Sup
Since EC2 access is needed for our OpenStack users, we enabled the nova-ec2
service in OpenStack. This way every user already has EC2 credentials
that can also be used for S3.
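As an illustration, the EC2 credentials can be pulled from the OpenStack CLI
and pointed at the RGW S3 endpoint (the endpoint hostname is a placeholder):

    openstack ec2 credentials create   # or: openstack ec2 credentials list
    s3cmd --access_key=<EC2 access> --secret_key=<EC2 secret> \
          --host=rgw.example.com --host-bucket='%(bucket)s.rgw.example.com' ls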
PS: If you are using Ocata there is unfortunately a problem:
https://ask.openstack.org/en/question/106557/swift3s3-api-