Hi,
Is anyone running Ceph Luminous (12.2.0) on 32-bit Linux? Have you seen
any problems?
My setup has been 1 MON and 7 OSDs (no MDS, RGW, etc.), all running Jewel
(10.2.1) on 32-bit, with no issues at all.
I've upgraded everything to the latest version of Jewel (10.2.9) and still
have no issues.
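A minimal way to confirm what each daemon is actually running after the
upgrade (a sketch; the mon id "a" is a placeholder, and it assumes the
default admin socket location):
$ ceph --version              # version of the locally installed packages
$ ceph tell osd.* version     # ask every OSD for its running version
$ ceph daemon mon.a version   # on the monitor host, via the admin socket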
Zheng Yan,
I set "mds_bal_fragment_size_max = 10, mds_bal_frag = true", then wrote
10 files named 512k.file$i, but some files are still missing. For example:
[root@yj43959-ceph-dev cephfs]# find ./volumes/ -type f | wc -l
91070
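For reference, a sketch of the ceph.conf fragment those settings correspond
to (option names as in Jewel/Luminous; note that the default for
mds_bal_fragment_size_max is 100000, so 10 is unusually low):
[mds]
    mds_bal_frag = true
    mds_bal_fragment_size_max = 10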
donglifec...@gmail.com
From: Yan, Zheng
It took a while. It appears to have cleaned up quite a bit... but it still has
issues. I've been seeing the message below for more than a day, and both CPU
and I/O utilization are low... it looks like something is stuck. I rebooted the
OSDs several times earlier, when it looked like it was stuck, and i
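Some first diagnostics for a recovery that looks stuck (a sketch; the OSD
id 12 is a placeholder):
$ ceph health detail                      # which PGs are stuck, and on which OSDs
$ ceph pg dump_stuck unclean              # list PGs stuck in non-clean states
$ ceph daemon osd.12 dump_ops_in_flight   # on the OSD host: any blocked ops?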
Hello people,
after a series of events and some operational mistakes, 1 PG in our cluster
is in the active+recovering+degraded+remapped state, reporting 1 unfound
object.
We're running Hammer (v0.94.9) on top of Debian Jessie, on 27 nodes and
162 OSDs with the default crushmap and the nodeep-scrub flag
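One way to track down a single unfound object (a sketch; the PG id 2.5 is
a placeholder -- take the real one from ceph health detail):
$ ceph pg 2.5 list_missing                # which object, and what is known about it
$ ceph pg 2.5 query                       # "might_have_unfound" shows unprobed OSDs
# only as a last resort, once every candidate OSD has been probed:
$ ceph pg 2.5 mark_unfound_lost revert    # roll back to the previous version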
Just tried it; there is not much more log output in ceph -w (see below), nor
from the qemu process.
[15:52:43] server4:~$ /usr/bin/qemu-system-x86_64 -name one-17031 -S
-machine pc-i440fx-2.1,accel=kvm,usb=off -m 8192 -realtime mlock=off
-smp 6,sockets=6,cores=1,threads=1 -uuid
79845fca-9b26-4072-bc
Sorry -- meant VM. Yes, librbd uses ceph.conf for configuration settings.
On Sun, Sep 10, 2017 at 9:22 AM, Nico Schottelius
wrote:
>
> Hello Jason,
>
> I think there is a slight misunderstanding:
> There is only one *VM*, not one OSD left that we did not start.
>
> Or does librbd also read ceph.conf
Hello Jason,
I think there is a slight misunderstanding:
There is only one *VM*, not one OSD left that we did not start.
Or does librbd also read ceph.conf and will that cause qemu to output
debug messages?
Best,
Nico
Jason Dillaman writes:
> I presume QEMU is using librbd instead of a mapped
I presume QEMU is using librbd instead of a mapped krbd block device,
correct? If that is the case, can you add "debug rbd = 20" and "debug
objecter = 20" to your ceph.conf and boot up your last remaining broken
OSD?
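For reference, a sketch of the ceph.conf fragment that corresponds to
(placing it in the [client] section and the log path are assumptions):
[client]
    debug rbd = 20
    debug objecter = 20
    log file = /var/log/ceph/client.$pid.log   # must be writable by the qemu user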
On Sun, Sep 10, 2017 at 8:23 AM, Nico Schottelius
wrote:
>
> Good morning,
>
> yesterday
Good morning,
yesterday we had an unpleasant surprise that I would like to discuss:
Many (not all!) of our VMs were suddenly
dying (the qemu process exiting), and when we tried to restart them, we saw
i/o errors on the disks inside the qemu process and the OS was not able to
start (i.e. it stopped in init
I'm not a huge fan of train releases, as they tend to never quite make
it on time, and the timeline always feels a bit artificial anyway. OTOH,
I do see and understand the need for a predictable schedule with a
roadmap attached to it. There are many who need to have at least a
vague idea of what we're
Hi,
I had a similar problem on Jewel, where I was unable to properly delete
objects even though radosgw-admin returned rc 0 after issuing rm; somehow
the object was deleted but the metadata wasn't removed.
I ran
# radosgw-admin --cluster ceph object stat --bucket=weird_bucket
--object=$OBJECT
to f
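Once object stat confirms leftover metadata, one possible follow-up (not
necessarily what resolved it in this thread) is to rebuild the bucket
index:
# radosgw-admin --cluster ceph bucket check --bucket=weird_bucket \
  --fix --check-objects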