Re: [ceph-users] Ceph health warn MDS failing to respond to cache pressure

2017-05-12 Thread José M. Martín
Hi, I'm having the same issues running MDS version 11.2.0 and kernel clients 4.10. Regards, Jose

On 10/05/17 at 09:11, gjprabu wrote:
> Hi John,
>
> Thanks for your reply, we are using the below version for client and
> MDS (ceph version 10.2.2)
>
> Regards,
> Prabu GJ
>
>
> On W

Re: [ceph-users] Minimize data lost with PG incomplete

2017-02-01 Thread José M. Martín
02/17 at 14:29, José M. Martín wrote:
> Hi Maxime
>
> I have 3 of the original disks but I don't know which OSD corresponds
> to each one. Besides, I don't think I have enough technical skills to do
> that and I don't want to make things worse...
> I'm trying to writ

Re: [ceph-users] Minimize data lost with PG incomplete

2017-02-01 Thread José M. Martín
pg that are down.
>
> Cheers,
>
> On 31/01/17 11:48, "ceph-users on behalf of José M. Martín"
> wrote:
>
> Any idea how I could recover files from the filesystem mount?
> Doing a cp, it hangs when it finds a damaged file/folder. I would be happy
>

Re: [ceph-users] Minimize data lost with PG incomplete

2017-01-31 Thread José M. Martín
Any idea how I could recover files from the filesystem mount? Doing a cp, it hangs when it finds a damaged file/folder. I would be happy getting the non-damaged files. Thanks

On 31/01/17 at 11:19, José M. Martín wrote:
> Thanks.
> I just realized I keep some of the original OSDs. If it co
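The salvage approach asked about above (copy everything that still reads, skip files whose reads hang) can be sketched in shell. The mount points, timeout value, and log path below are assumptions for illustration, not details from the thread:

```shell
#!/bin/bash
# Copy readable files out of a damaged CephFS mount, skipping any
# file whose read hangs (per-file timeout), and log what was skipped.
SRC=/mnt/cephfs      # damaged CephFS mount (assumed path)
DST=/mnt/backup      # destination with enough free space (assumed path)

cd "$SRC" || exit 1
find . -type f -print0 |
while IFS= read -r -d '' f; do
    # GNU timeout kills cp if a damaged object makes the read hang,
    # so one bad file cannot stall the whole copy.
    timeout 60 cp --parents "$f" "$DST"/ || echo "SKIPPED: $f" >> /tmp/skipped.log
done
```

Afterwards `/tmp/skipped.log` lists the paths that could not be copied, which roughly maps to the damaged files.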

Re: [ceph-users] Minimize data lost with PG incomplete

2017-01-31 Thread José M. Martín
ut of my head, but you can try
> > setting min_size to 1 for pools to reactivate some PGs, if they are
> > down/inactive due to missing replicas.
> >
> > On 17-01-31 10:24, José M. Martín wrote:
> >> # ceph -s
> >>
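The min_size suggestion quoted above translates to a one-line pool change. A minimal sketch, assuming a pool named `rbd` (the actual pool name is not given in the thread), and with the caveat that this is risky on an unhealthy cluster:

```shell
# Allow PGs in the pool to go active with only one surviving replica.
# WARNING: with min_size 1, losing that last replica means data loss.
ceph osd pool set rbd min_size 1

# After recovery completes, restore the safer setting.
ceph osd pool set rbd min_size 2
```

This only helps PGs that are down because too few replicas are currently available; it cannot repair PGs that are genuinely incomplete.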

Re: [ceph-users] Minimize data lost with PG incomplete

2017-01-31 Thread José M. Martín
> On 17-01-31 10:24, José M. Martín wrote:
>> # ceph -s
>>     cluster 29a91870-2ed2-40dc-969e-07b22f37928b
>>      health HEALTH_ERR
>>             clock skew detected on mon.loki04
>>             155 pgs are stuck inactive for more than 300 seconds
>>

Re: [ceph-users] Minimize data lost with PG incomplete

2017-01-31 Thread José M. Martín
ceph-deploy osd create loki01:/dev/sda, for every disk in rack "sala1". First, I finished loki02. Then, I did these steps on loki04, loki01 and loki03 at the same time. Thanks, -- José M. Martín

On 31/01/17 at 00:43, Shinobu Kinjo wrote:
> First off, the followings, please
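The per-disk deployment described above can be sketched as a loop, using jewel-era `ceph-deploy osd create host:device` syntax. The device list here is illustrative only and would have to be checked against the actual hosts:

```shell
# Create one OSD per data disk on a host (run from the admin node).
# Verify the device list with `lsblk` on each host before running.
for dev in /dev/sda /dev/sdb /dev/sdc; do
    ceph-deploy osd create "loki01:${dev}"
done
```

Repeating this across several hosts at once is exactly what triggers heavy rebalancing, which is the root of the problem described in this thread.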

[ceph-users] Minimize data lost with PG incomplete

2017-01-30 Thread José M. Martín
Dear list, I'm having some big problems with my setup. I was trying to increase the global capacity by replacing some OSDs with bigger ones. I changed them without waiting for the rebalance process to finish, thinking the replicas were saved in other buckets, but I found a lot of incomplete PGs, so replicas
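For contrast with what happened above, a hedged sketch of the safer disk-replacement procedure (drain the OSD first, wait for clean PGs, only then remove it). The OSD id `12` and the jewel-era removal commands are assumptions for illustration:

```shell
# 1. Mark the old OSD out so its data migrates to other OSDs first.
ceph osd out 12

# 2. Wait until the cluster is healthy (all PGs active+clean)
#    before touching any hardware.
until ceph health | grep -q HEALTH_OK; do
    sleep 60
done

# 3. Only now stop and remove the old OSD (jewel-era steps).
systemctl stop ceph-osd@12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12
```

Repeating this one OSD at a time keeps all replicas available throughout, which is what the parallel replacement in this thread failed to do.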