On Mon, Nov 27, 2017 at 03:10:09AM +, David Turner wrote:
> Disclaimer... This is slightly off topic and a genuine question. I am a
> container noobie that has only used them for test environments for nginx
> configs and ceph client multi-tenancy benchmarking.
>
> I understand the benefits to
I understand the benefits to containerizing RGW, MDS, and MGR daemons. I
can even come up with a de
Hi!
I am trying to install ceph in container, but osd always failed:
[root@d32f3a7b6eb8 ~]$ ceph -s
  cluster:
    id:     a5f1d744-35eb-4e1b-a7c7-cb9871ec559d
    health: HEALTH_WARN
            Reduced data availability: 128 pgs inactive
            Degraded data redundancy: 128 pgs unclean
s
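For machine-checking a status like the one above, here is a minimal Python sketch that counts inactive PGs from `ceph -s --format json` output. The field names (`pgmap`, `pgs_by_state`, `state_name`) follow the Luminous-era JSON schema; the embedded sample data is hypothetical and trimmed to only the keys inspected:

```python
import json

# Hypothetical, trimmed sample of `ceph -s --format json` output;
# a real cluster returns many more keys.
sample = json.loads("""
{
  "health": {"status": "HEALTH_WARN"},
  "pgmap": {
    "num_pgs": 128,
    "pgs_by_state": [
      {"state_name": "undersized+peered", "count": 128}
    ]
  }
}
""")

def inactive_pgs(status):
    """Count PGs whose state string does not contain 'active'."""
    return sum(s["count"]
               for s in status["pgmap"]["pgs_by_state"]
               if "active" not in s["state_name"])

print(inactive_pgs(sample))  # prints 128: every PG is inactive here
```

A state like `undersized+peered` lacks `active`, which is exactly the "Reduced data availability: 128 pgs inactive" warning shown above.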
In filestore the journal is crucial to the operation of the OSD: it ensures
consistency. If it's toast, then so is the associated OSD in most cases. I
think people often overlook this when they attach many OSDs to a single
journal drive to save cost.
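To make the cost/risk trade-off concrete, a tiny back-of-the-envelope sketch (the OSD and SSD counts below are hypothetical, not from the thread):

```python
# With filestore, losing a journal device generally loses every OSD
# that journals to it. Example numbers (hypothetical):
osds_per_journal_ssd = 12   # OSDs sharing one journal SSD
total_osds = 48             # OSDs in the whole cluster

# Fraction of the cluster's OSDs taken down by one journal SSD failure:
blast_radius = osds_per_journal_ssd / total_osds
print(f"{blast_radius:.0%} of OSDs lost on one journal failure")  # 25%
```

The fewer OSDs you hang off each journal device, the smaller the blast radius of a single SSD failure.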
On Sun, Nov 26, 2017 at 5:23 AM, Hauke Hombur
If you are at a point where you need to repair the XFS partition, you
should probably just rebuild the OSD and backfill back onto it as a fresh
OSD. That's even more true now that the repair has had bad side effects.
On Sat, Nov 25, 2017, 11:33 AM Hauke Homburg
wrote:
> Hello List,
>
> Yesterday i
If I am not mistaken, the whole idea with the 3 replicas is that you
have enough copies to recover from a failed OSD. In my tests this seems
to happen automatically. Are you doing something that is not advised?
-----Original Message-----
From: Gonzalo Aguilar Delgado [mailto:gagui...@aguil