It doesn't matter anymore. The MDS has crashed and it is stuck in the
rejoin state.
Now I'm thinking of deleting the pool and starting again. Is it safe or
advisable to use an erasure-coded pool for CephFS?
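For context, a minimal sketch of what that setup would look like (pool
names and PG counts are placeholders; the metadata pool must remain
replicated, and pre-Luminous releases only support an EC data pool
behind a replicated cache tier):

  # create an EC data pool and a replicated metadata pool
  ceph osd pool create cephfs_data 128 128 erasure
  ceph osd pool create cephfs_metadata 64 64 replicated
  # Luminous or later, BlueStore OSDs only:
  ceph osd pool set cephfs_data allow_ec_overwrites true
  ceph fs new cephfs cephfs_metadata cephfs_data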
Thank you very much for your time. I really like this software.
Cheers,
José
On 01/02/17
Hi Maxime
I have 3 of the original disks but I don't know which OSD corresponds to
each one. Besides, I don't think I have enough technical skill to do
that, and I don't want to make things worse...
I'm trying to write a script that copies files from the damaged CephFS
to a new location.
Any help will be very much appreciated.
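A minimal sketch of such a recovery script, assuming the damaged CephFS
is mounted at /mnt/cephfs and the destination is /mnt/recovery (both
paths are placeholders). It copies one file at a time with a per-file
timeout so a read that hangs on a damaged object does not stall the
whole run (though a read stuck in uninterruptible sleep may still
ignore the timeout):

  #!/bin/bash
  SRC=/mnt/cephfs
  DST=/mnt/recovery
  cd "$SRC" || exit 1
  find . -type f -print0 | while IFS= read -r -d '' f; do
      mkdir -p "$DST/$(dirname "$f")"
      # skip files already copied; give up on any file after 30 seconds
      [ -e "$DST/$f" ] || timeout 30 cp -p "$f" "$DST/$f" \
          || echo "SKIPPED: $f" >> /tmp/skipped.log
  done

Re-running it after a remount picks up where it left off, since
already-copied files are skipped.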
Hi José
If you have some of the original OSDs (not zapped or erased) then you might be
able to just re-add them to your cluster and have a happy cluster.
If you attempt the ceph-objectstore-tool --op export & import, make sure to do it
on a temporary OSD of weight 0, as recommended in the link provided.
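For reference, a rough sketch of that export/import flow (OSD IDs, the
PG ID, and paths below are placeholders; the source OSD must be stopped
first):

  # export the incomplete PG from the old disk
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --journal-path /var/lib/ceph/osd/ceph-3/journal \
      --pgid 1.2f --op export --file /tmp/pg1.2f.export
  # import it into a temporary OSD kept at CRUSH weight 0
  ceph osd crush reweight osd.42 0
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-42 \
      --journal-path /var/lib/ceph/osd/ceph-42/journal \
      --op import --file /tmp/pg1.2f.export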
Any idea how I could recover files from the filesystem mount?
Doing a cp, it hangs when it finds a damaged file/folder. I would be
happy just getting the undamaged files.
Thanks
On 31/01/17 at 11:19, José M. Martín wrote:
Thanks.
I just realized I still have some of the original OSDs. If they contain
some of the incomplete PGs, would it be possible to add them back in
alongside the new disks?
Maybe following these steps? http://ceph.com/community/incomplete-pgs-oh-my/
On 31/01/17 at 10:44, Maxime Guyot wrote:
Hi José,
Too late, but you could have updated the CRUSH map *before* moving the disks.
Something like "ceph osd crush set osd.0 0.90329 root=default rack=sala2.2
host=loki05" would move osd.0 to loki05 and trigger the appropriate
PG movements before any physical move. Then the physical move itself could
have been done without surprises.
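A sketch of that workflow end to end (the OSD ID, weight, and bucket
names are taken from the example above):

  # update the CRUSH location first, while the disk is still in place
  ceph osd crush set osd.0 0.90329 root=default rack=sala2.2 host=loki05
  # watch recovery until the cluster is HEALTH_OK again
  ceph -w
  # only then stop the OSD and physically move the disk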
Already min_size = 1
Thanks,
Jose M. Martín
On 31/01/17 at 09:44, Henrik Korkuc wrote:
I am not sure about the "incomplete" part off the top of my head, but you
can try setting min_size to 1 for the pools to reactivate some PGs, if they
are down/inactive due to missing replicas.
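For example (the pool name is a placeholder; remember to raise min_size
back afterwards, since running at min_size 1 risks data loss):

  ceph osd pool set cephfs_data min_size 1
  # later, once the PGs are active+clean again:
  ceph osd pool set cephfs_data min_size 2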
On 17-01-31 10:24, José M. Martín wrote:
# ceph -s
    cluster 29a91870-2ed2-40dc-969e-07b22f37928b
     health HEALTH_ERR
            clock skew detected on mon.loki04
            155 pgs are stuck inactive for more than 300 seconds
            7 pgs backfill_toofull
            1028 pgs backfill_wait
            48 pgs backfilling
First off, please post the following:
* ceph -s
* ceph osd tree
* ceph pg dump
and
* what you actually did, with the exact commands.
Regards,
On Tue, Jan 31, 2017 at 6:10 AM, José M. Martín wrote:
Dear list,
I'm having some big problems with my setup.
I was trying to increase the global capacity by replacing some OSDs with
bigger ones. I replaced them without waiting for the rebalance process to
finish, thinking the replicas were safe in other buckets, but I found a
lot of incomplete PGs, so some replicas are missing.
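For contrast, a sketch of the usual safe replacement sequence, one OSD
at a time ({osd-num} is a placeholder):

  ceph osd out {osd-num}        # start draining data off the OSD
  # wait until ceph -s reports all PGs active+clean, then:
  systemctl stop ceph-osd@{osd-num}
  ceph osd crush remove osd.{osd-num}
  ceph auth del osd.{osd-num}
  ceph osd rm {osd-num}
  # now physically swap the disk and create the replacement OSD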