On Sat, 15 Apr 2023 at 11:10, Marco Gaiarin <g...@lilliput.linux.it> wrote:
>
> Sorry, I'm a bit puzzled here.
>
> Matthias suggests enabling the write cache, you suggest disabling it... or am
> I cache-confused?! ;-)
>
>
>
> There is a cache in each disk, and a cache in the controller
Hi,
do you want to hear the truth from real experience?
Or the myth?
The truth is that:
- HDDs are too slow for Ceph; the first time you need to do a rebalance or
something similar you will discover it...
- if you want to use HDDs, do a RAID with your controller and use the
controller's BBU cache (do not consider c…
You can also use consumer drives, considering that it is a homelab.
Otherwise try to find Seagate Nytro XM1441 or XM1440 drives.
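As for the cache question: you can check whether the on-disk volatile write
cache is enabled with hdparm, and a single-job sync-write fio run is the quick
test usually suggested for judging whether a drive can keep up with Ceph. This
is only a sketch: the device path is an example, and the fio run destroys data,
so point it at an empty disk.

  # show the drive's volatile write-cache setting (example device)
  hdparm -W /dev/sdX
  # disable it (what a controller with a BBU-protected cache usually expects)
  hdparm -W 0 /dev/sdX
  # 4k sync writes at queue depth 1 -- destroys data on /dev/sdX
  fio --name=synctest --filename=/dev/sdX --ioengine=libaio --direct=1 --sync=1 \
      --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based

Drives with power-loss protection (like the Nytro line) typically stay fast on
this test, while many consumer drives slow down dramatically.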
Mario
On Mon, 15 Nov 2021 at 14:59, Eneko Lacunza wrote:
> Hi Varun,
>
> That Kingston DC grade model should work (well enough at least for a
> home lab), it has
We need more details, but are you using krbd? iothread? and so on?
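For what it's worth, this is roughly where those settings live on Proxmox; a
sketch only, the storage name and VM ID below are made up:

  # /etc/pve/storage.cfg -- RBD storage using the kernel client
  rbd: ceph-vm
          pool vm-pool
          content images
          krbd 1

  # /etc/pve/qemu-server/100.conf -- one iothread per disk needs virtio-scsi-single
  scsihw: virtio-scsi-single
  scsi0: ceph-vm:vm-100-disk-0,iothread=1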
On Thu, 6 May 2021 at 22:38, codignotto wrote:
> Hello, I have 6 hosts with 12 SSD disks on each host for a total of 72 OSDs,
> I am using Ceph Octopus in its latest version, the deployment was done
> using ceph ad…
On Mon, 15 Feb 2021 at 15:16, mj wrote:
>
>
> On 2/15/21 1:38 PM, Eneko Lacunza wrote:
> > Do you really need MLAG? (the 2x10G bandwidth?). If not, just use 2
> > simple switches (Mikrotik for example) and in Proxmox use an
> > active-passive bond, with default interface in all nodes…
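For reference, an active-backup bond like that looks roughly like this in
/etc/network/interfaces on a Proxmox/Debian node; interface names and the
address are examples only:

  auto bond0
  iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100

  auto vmbr0
  iface vmbr0 inet static
          address 192.168.10.11/24
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0

No MLAG is needed on the switches for this; each NIC simply goes to a different
switch.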
On Thu, 4 Feb 2021 at 12:19, Eneko Lacunza wrote:
> Hi all,
>
> On 4/2/21 at 11:56, Frank Schilder wrote:
> >> - three servers
> >> - three monitors
> >> - 6 osd (two per server)
> >> - size=3 and min_size=2
> > This is a set-up that I would not run at all. The first one is,
maintain size=3…
An OSD reached 90% full and Ceph stopped everything.
Customer VMs froze, and the customer lost time and some data that had not yet
been written to disk.
So I got angry: size=3 and the customer still loses time and data?
> Cheers, Dan
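For reference, the thresholds that make Ceph block I/O as OSDs fill up can be
checked and, carefully, adjusted; the values shown below are the usual defaults:

  # show the configured ratios
  ceph osd dump | grep ratio
  # defaults: warn at 85%, stop backfill at 90%, stop client I/O at 95%
  ceph osd set-nearfull-ratio 0.85
  ceph osd set-backfillfull-ratio 0.90
  ceph osd set-full-ratio 0.95
  # see which OSDs are getting close
  ceph health detail
  ceph osd df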
On Thu, 4 Feb 2021 at 00:33, Simon Ironside <sirons...@caffetine.org> wrote:
>
>
> On 03/02/2021 19:48, Mario Giammarco wrote:
>
> To labour Dan's point a bit further, maybe a RAID5/6 analogy is better
> than RAID1. Yes, I know we're not tal
Hi Federico,
here I am not mixing RAID1 with Ceph. I am making a comparison: is it safer
to have one server with RAID1 disks, or two servers with Ceph and size=2
min_size=1?
We are talking about real-world examples, where a customer is buying a new
server and wants to choose.
On Thu, 4 Feb 2021 at…
considered?
Thanks again!
Mario
On Wed, 3 Feb 2021 at 17:42, Simon Ironside <sirons...@caffetine.org> wrote:
> On 03/02/2021 09:24, Mario Giammarco wrote:
> > Hello,
> > Imagine this situation:
> > - 3 servers with ceph
> > - a pool with size 2 m
Hello,
Imagine this situation:
- 3 servers with ceph
- a pool with size 2 min 1
I know perfectly well that size 3 and min 2 is better.
I would like to know what is the worst thing that can happen:
- a disk breaks, and another disk breaks before Ceph has reconstructed the
second replica; OK, I lose data
- if…
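For reference, size and min_size are per-pool settings and can be inspected or
changed at any time; the pool name below is just an example, and the usual
advice stands: size 3, min_size 2.

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # the setup described above
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1
  # with min_size 1 a PG keeps accepting writes while only one copy is left;
  # if that last copy dies before recovery finishes, the data is gone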
Hello,
if I have a pool with replica 3, what happens when one replica is corrupted?
I suppose Ceph detects the bad replica using checksums and replaces it with a
good one.
If I have a pool with replica 2, what happens?
Thanks,
Mario
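For reference, this is roughly what it looks like in practice when a scrub
finds a bad copy; the PG id 2.1a below is just a placeholder:

  # scrub errors show up as inconsistent PGs
  ceph health detail
  # list the objects the scrub flagged in that PG
  rados list-inconsistent-obj 2.1a --format=json-pretty
  # rebuild the bad copy from an authoritative replica
  ceph pg repair 2.1a

With BlueStore's per-object checksums Ceph can usually tell which copy is bad
even with only two replicas; with two replicas and no usable checksum it cannot
know which copy to trust.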