Spinners are slow anyway, but on top of that SAS disks often default to
write cache off (WCE cleared). When a disk is used on its own, with no RAID
write-hole to worry about, you can turn the write cache on. On SAS I would
assume the firmware does not lie about writes reaching stable storage
(i.e. it honors flushes).
# turn on temporarily:
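# (assuming sdparm is available and the disk is /dev/sdX; adjust the device name)
sdparm --set=WCE /dev/sdX        # takes effect now, lost after a power cycle
# to persist it across power cycles, also write the saved mode page:
sdparm --save --set=WCE /dev/sdX
# check the current value:
sdparm --get=WCE /dev/sdX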
How bizarre; I haven’t dealt with this specific SKU before. Some Dell / LSI
HBAs call this “passthrough” mode, some “personality”, some “JBOD mode”; I
don’t know why they can’t be consistent.
> We are testing an experimental Ceph cluster with the server and controller
> named in the subject.
>
> The controller have
Does "ceph health detail" give any insight into what the unexpected
exception was? If not, I'm pretty confident some traceback would end up
being logged. Could maybe still grab it with "ceph log last 200 info
cephadm" if not a lot else has happened. Also, probably need to find out if
the check-host
Hi all,
I have a problem upgrading a Ceph cluster from Pacific to Quincy with
cephadm. I have successfully upgraded the cluster to the latest Pacific
(16.2.11). But when I run the following command to upgrade the cluster to
17.2.5, the upgrade process stops after upgrading 3/4 of the mgrs
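For reference, with cephadm such an upgrade is normally started and watched
with something like the following (using the 17.2.5 target mentioned above;
with a local registry you would pass --image instead):

  ceph orch upgrade start --ceph-version 17.2.5
  # check progress and which daemons are done:
  ceph orch upgrade status
  # follow the cephadm log live:
  ceph -W cephadm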
We are testing an experimental Ceph cluster with the server and controller
named in the subject.
The controller does not have an HBA mode, only a 'NonRAID' mode, some sort
of 'auto RAID0' configuration.
We are using SATA SSDs (MICRON MTFDDAK480TDT) that perform very well,
and SAS HDDs (SEAGATE ST800