Hello,
I'm building a 5-server cluster across three rooms/racks. Each server has 8
960 GB SSDs used as BlueStore OSDs. Ceph version 12.1.2 is used.
rack1: server1(mon) server2
rack2: server3(mon) server4
rack3: server5(mon)
The crushmap was built this way:
ceph osd
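For context, a rack-level CRUSH layout like this is typically built with commands along the following lines; this is only a sketch with assumed bucket and rule names, not necessarily the exact commands used here:

  # create one rack bucket per room/rack and attach them to the default root
  ceph osd crush add-bucket rack1 rack
  ceph osd crush add-bucket rack2 rack
  ceph osd crush add-bucket rack3 rack
  ceph osd crush move rack1 root=default
  ceph osd crush move rack2 root=default
  ceph osd crush move rack3 root=default
  # move the host buckets under their racks
  ceph osd crush move server1 rack=rack1
  ceph osd crush move server2 rack=rack1
  ceph osd crush move server3 rack=rack2
  ceph osd crush move server4 rack=rack2
  ceph osd crush move server5 rack=rack3
  # replicated rule with rack as the failure domain (Luminous syntax); "by-rack" is a made-up name
  ceph osd crush rule create-replicated by-rack default rack

With such a rule and size=3, each replica lands in a different rack.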
Hello,
I have a fresh Proxmox installation on 5 servers (Supermicro X10SRW-F, Xeon
E5-1660 v4, 128 GB RAM), each with 8 Samsung SM863 960 GB SSDs connected to an
LSI-9300-8i (SAS3008) controller, used as OSDs for Ceph (12.1.2).
The servers are connected to two Arista DCS-7060CX-32S switches. I'm using
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
> (thread: Maybe some tuning for bonded network adapters)
>
> -----Original Message-----
> From: Andreas Herrmann [mailto:andr...@mx20.org]
> Sent: vrijdag 8 september 2017 13:58
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] output di
Hi,
On 08.09.2017 15:59, Burkhard Linke wrote:
> On 09/08/2017 02:12 PM, Marc Roos wrote:
>>
>> Afaik ceph is not supporting/working with bonding.
>>
>> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
>> (thread: Maybe some tuning for bonded network adapters)
>
> CEPH works fine with bonding.
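For what it's worth, Ceph itself is agnostic to bonding: it opens many TCP connections between clients, OSDs and monitors, and an 802.3ad (LACP) bond can spread those flows across links as long as the transmit hash policy includes layer 3/4 information. A minimal Proxmox-style /etc/network/interfaces sketch; interface names and the address are placeholders:

auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

The switch side needs a matching LACP port-channel (across two switches this usually means MLAG on the Aristas) with a comparable hash policy, otherwise traffic collapses onto one member link.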
Hi,
On 08.09.2017 16:25, Burkhard Linke wrote:
>>> Regarding the drops (and without any experience with neither 25GBit ethernet
>>> nor the Arista switches):
>>> Do you have corresponding input drops on the server's network ports?
>> No input drops, just output drops.
> Output drops on the switch ar
> I had problems with CentOS 7 kernel 3.10 recently, with packet drops.
>
> I also have problems with Ubuntu kernel 4.10 and LACP.
>
> Kernels 4.4 and 4.12 are working fine for me.
>
> ----- Original Message -----
> From: "Burkhard Li
Et18/1 on on on on 0 36948
Et18/2 on on on on 0 39628
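The two lines above look like (truncated) per-port flow-control/pause output from the switch. On the Linux side the corresponding counters and pause settings can be checked with standard tools; a hedged example, with the interface name as a placeholder:

  # NIC/driver statistics, including drops, discards and pause frames (counter names vary per driver)
  ethtool -S enp5s0f0 | egrep -i 'drop|discard|pause'
  # current flow-control (pause) settings on the NIC
  ethtool -a enp5s0f0
  # kernel-level RX/TX drop counters
  ip -s link show enp5s0f0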
Regards,
Andreas
On 08.09.2017 13:57, Andreas Herrmann wrote:
> Hello,
>
> I have a fresh Proxmox installation on 5 servers (Supermicro X10SRW-F,
Hi,
how could this happen:
pgs: 197528/1524 objects degraded (12961.155%)
I did some heavy failover tests, but a value higher than 100% looks strange
(ceph version 12.2.0). Recovery is quite slow.
cluster:
health: HEALTH_WARN
3/1524 objects misplaced (0.197%)
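For reference, the arithmetic behind the odd figure: 197528 / 1524 × 100 ≈ 12961.155, so the numerator is roughly 130 times the total object count. Even counting one entry per missing replica could only reach (replica count × 1524), so a value this high means degraded object copies are being counted repeatedly while PGs are remapped and backfilled; as far as I know, early 12.2.x releases had exactly this over-counting issue during recovery.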
Hi Blair,
On 12.09.2017 00:41, Blair Bethwaite wrote:
> On 12 September 2017 at 01:15, Blair Bethwaite
> wrote:
>> Flow-control may well just mask the real problem. Did your throughput
>> improve? Also, does that mean flow-control is on for all ports on the
>> switch...? IIUC, then such "global
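To test whether flow-control merely hides the drops, pause frames can also be toggled on the host side and throughput re-measured; a small sketch, assuming the NIC driver supports pause configuration and using a placeholder interface name:

  # disable pause frames on the host NIC, then watch whether the switch output discards return
  ethtool -A enp5s0f0 rx off tx off
  # re-enable them
  ethtool -A enp5s0f0 rx on tx on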