Hi,
when trying to use df on a ceph-fuse mounted cephfs filesystem with ceph
luminous >= 12.1.3 I'm seeing hangs, with the following kind of messages
in the logs:
2017-08-22 02:20:51.094704 7f80addb7700 0 client.174216 ms_handle_reset
on 192.168.0.10:6789/0
The logs only show this
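For reference, a quick way to check whether the hung ceph-fuse client still
has live MDS sessions is to query its admin socket; the .asok path below is
only an example and has to be adjusted to the actual client:
  # overall cluster state, from an admin node
  ceph -s
  # MDS sessions as seen by the ceph-fuse client itself
  ceph daemon /var/run/ceph/ceph-client.admin.asok mds_sessions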
Hi,
I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded.
I have 2 active mds instances and 1 standby. All the active instances
are now in replay state and show the same error in the logs:
mds1
2018-01-08 16:04:15.765637 7fc2e92451c0 0 ceph version 12.2.2
(cf0baee
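As a rough sketch, the rank states (active/replay/standby) and the MDS logs
can be checked with the standard status commands; the daemon name below is
only an example:
  ceph fs status
  ceph mds stat
  # follow the log of one of the replaying MDS daemons
  journalctl -u ceph-mds@mds1 -f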
Mon, 2018-01-08 at 17:21 +0100, Alessandro De Salvo wrote:
Hi,
I'm running on ceph luminous 12.2.2 and my cephfs suddenly degraded.
I have 2 active mds instances and 1 standby. All the active
instances
are now in replay state and show the same error in the logs:
mds1
2018-01-08 1
On 01/08/2018 05:40 PM, Alessandro De Salvo wrote:
> > Thanks Lincoln,
> >
> > indeed, as I said the cluster is recovering, so there are pending ops:
> >
> >
> > pgs: 21.034% pgs not active
> > 1692310/24980804 objects degraded (6.7
Hi,
several times a day we have different OSDs running Luminous 12.2.2 and
Bluestore crashing with errors like this:
starting osd.2 at - osd_data /var/lib/ceph/osd/ceph-2
/var/lib/ceph/osd/ceph-2/journal
2018-01-30 13:45:28.440883 7f1e193cbd00 -1 osd.2 107082 log_to_monitors
{default=true}
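For a crashing BlueStore OSD, a consistency check with the OSD stopped can
help narrow this down; a minimal sketch using osd.2 from the log above:
  systemctl stop ceph-osd@2
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-2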
On Tue, Jan 30, 2018 at 5:49 AM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
several times a day we have different OSDs running Luminous 12.2.2 and
Bluestore crashing with errors like this:
starting osd.2 at - osd_data /var/lib/ceph
Hi,
after the upgrade to luminous 12.2.6 today, all our MDSes have been
marked as damaged. Trying to restart the instances only results in
standby MDSes. We currently have 2 active filesystems, with 2 MDSes each.
I found the following error messages in the mon:
mds.0 :6800/2412911269 down:damaged
e damage before
issuing the "repaired" command?
What is the history of the filesystems on this cluster?
On Wed, Jul 11, 2018 at 8:10 AM Alessandro De Salvo
<alessandro.desa...@roma1.infn.it> wrote:
Hi,
after the upgrade to luminous 12.2.6 today, all our MDS
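Once the underlying metadata objects are readable again, the usual path back
from the damaged state is the "repaired" command mentioned above; a sketch,
assuming rank 0 of a filesystem named cephfs:
  # inspect the journal of the damaged rank first
  cephfs-journal-tool journal inspect   # newer releases want --rank=<fs>:<rank>
  # clear the damaged flag so a standby can take over the rank
  ceph mds repaired cephfs:0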
, 2018 at 4:10 PM Alessandro De Salvo
wrote:
Hi,
after the upgrade to luminous 12.2.6 today, all our MDSes have been
marked as damaged. Trying to restart the instances only results in
standby MDSes. We currently have 2 active filesystems, with 2 MDSes each.
I found the following error messages in the
controllers, but 2 of
the OSDs with 10.14 are on a SAN system and one on a different one, so I
would tend to exclude that they both had (silent) errors at the same time.
Thanks,
Alessandro
On 11/07/18 18:56, John Spray wrote:
On Wed, Jul 11, 2018 at 4:49 PM Alessandro De Salvo
wrote:
> On 11 Jul 2018, at 23:25, Gregory Farnum wrote:
>
>> On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo
>> wrote:
>> OK, I found where the object is:
>>
>>
>> ceph osd map cephfs_metadata 200.
>>
On 12/07/18 10:58, Dan van der Ster wrote:
On Wed, Jul 11, 2018 at 10:25 PM Gregory Farnum wrote:
On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo
wrote:
OK, I found where the object is:
ceph osd map cephfs_metadata 200.
osdmap e632418 pool 'cephfs_metadata' (
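To see which OSDs hold the object and to reproduce the read error, the
mapping plus a direct read is usually enough; a sketch (the object name is
truncated in the excerpt, OBJ is a placeholder):
  ceph osd map cephfs_metadata OBJ
  # a failing copy shows up as an I/O error on a direct read
  rados -p cephfs_metadata get OBJ /tmp/OBJ.data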
On 12/07/18 11:20, Alessandro De Salvo wrote:
On 12/07/18 10:58, Dan van der Ster wrote:
On Wed, Jul 11, 2018 at 10:25 PM Gregory Farnum
wrote:
On Wed, Jul 11, 2018 at 9:23 AM Alessandro De Salvo
wrote:
OK, I found where the object is:
ceph osd map cephfs_metadata
showed up when
trying to read an object,
but not on scrubbing, that magically disappeared after restarting the
OSD.
However, in my case it was clearly related to
https://tracker.ceph.com/issues/22464 which doesn't
seem to be the issue here.
Paul
2018-07-12 13:53 GMT+02:00 Alessandr
5) Input/output
error)
Can I safely try to do the same as for object 200.? Should I
check something before trying it? Again, I checked the copies of the
object and they have identical md5sums on all the replicas.
Thanks,
Alessandro
On 12/07/18 16:46, Alessandro De Salvo wrote:
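For reference, the kind of per-replica check behind the md5sum comparison
above looks roughly like this, with the OSD stopped and placeholder ids, PG
and object names:
  systemctl stop ceph-osd@<id>
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> \
      --pgid <pgid> '<object>' get-bytes /tmp/obj.osd<id>
  systemctl start ceph-osd@<id>
  md5sum /tmp/obj.osd*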
, Jul 12, 2018 at 11:39 PM Alessandro De Salvo
wrote:
Some progress, and more pain...
I was able to recover the 200. object using ceph-objectstore-tool from one
of the OSDs (all copies are identical), but trying to re-inject it with a
plain rados put gave no error while the get was still
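The re-injection step boils down to a put-and-verify round trip; a minimal
sketch (the object name is again truncated above, OBJ is a placeholder):
  rados -p cephfs_metadata put OBJ /tmp/obj.recovered
  # read it back and compare checksums to confirm the write really took
  rados -p cephfs_metadata get OBJ /tmp/obj.check
  md5sum /tmp/obj.recovered /tmp/obj.check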
07 PM Alessandro De Salvo
wrote:
However, I cannot reduce the number of MDSes anymore; I used to do
that with e.g.:
ceph fs set cephfs max_mds 1
Trying this with 12.2.6 apparently has no effect: I am left with 2
active MDSes. Is this another bug?
Are you following this procedure?
http://docs.ceph.com
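For completeness: on Luminous, lowering max_mds does not stop the extra rank
by itself; the documented procedure also deactivates it explicitly (rank 1
below is an example):
  ceph fs set cephfs max_mds 1
  ceph mds deactivate cephfs:1
  ceph fs status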
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure with new pool parameters. I've found some hints but no
specific documentation to migrate pools.
I'm currently trying with rados export + import, but I get errors like
these:
Write #-9223372036854775808:
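For context, the export/import attempt mentioned above boils down to
something like the following; the pool names are placeholders:
  rados -p cephfs_data export /tmp/cephfs_data.dump
  rados -p cephfs_data_new import /tmp/cephfs_data.dump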
Hi,
On 13/06/18 14:40, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
wrote:
Hi,
I'm trying to migrate a cephfs data pool to a different one in order to
reconfigure with new pool parameters. I've found some hints but no
specific documentation to mig
Hi,
On 14/06/18 06:13, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo
wrote:
Hi,
On 13/06/18 14:40, Yan, Zheng wrote:
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo
wrote:
Hi,
I'm trying to migrate a cephfs data pool to a different one in ord