mount the ceph folder again and it worked.
The recovery is still in progress, but I guess I can sleep tight tonight.
Thanks,
X
On Tue, Feb 2, 2016 at 7:18 PM, yang wrote:
> You can try
> ceph daemon mds.host session evict
> to kill it off.
>
>
> ------ Original --
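The eviction step yang describes (`ceph daemon mds.<name> session evict <session_id>`, run on the host carrying the MDS admin socket) can be sketched end to end. The MDS name and session id below are taken from the session listing in this thread and are illustrative; substitute your own:

```python
import subprocess

# Illustrative values from this thread: "igc-head" is the MDS host,
# 274159 the id of the session stuck in "closing".
mds_name = "igc-head"
session_id = 274159

# `ceph daemon mds.<name> session evict <id>` kills the client session
# off on the MDS side, releasing the capabilities it holds.
cmd = ["ceph", "daemon", f"mds.{mds_name}",
       "session", "evict", str(session_id)]
print(" ".join(cmd))

# To actually run it on the MDS host:
# subprocess.run(cmd, check=True)
```

Evicting a session is a blunt instrument: the client is blocklisted and must be remounted, so confirm the session is genuinely stuck first with `ceph daemon mds.<name> session ls`.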
in",
"hostname": "igc-head"
}
},
{
"id": 274159,
"num_leases": 0,
"num_caps": 0,
"state": "closing",
"replay_requests": 0,
"reconnecting"
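The listing above is `session ls` output from the MDS. A session that lingers in the "closing" state while holding no caps or leases is the kind of candidate the evict advice targets; a minimal sketch of picking such sessions out of the JSON (sample data modeled on the fragment above):

```python
import json

# Illustrative fragment modeled on the `session ls` output quoted above.
sessions_json = """
[
  {"id": 274159, "num_leases": 0, "num_caps": 0,
   "state": "closing", "replay_requests": 0}
]
"""

# Sessions stuck in "closing" with no outstanding caps are candidates
# for `ceph daemon mds.<name> session evict <id>`.
stuck = [s["id"] for s in json.loads(sessions_json)
         if s["state"] == "closing" and s["num_caps"] == 0]
print(stuck)  # -> [274159]
```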
t of issues you are now dealing with.
>
> -Mykola
>
>
> On Tue, Feb 2, 2016 at 8:42 PM, Zhao Xu wrote:
>
> Thank you Mykola. The issue is that I/we have strongly suggested adding
> OSDs many times, but we are not the decision makers.
> For now, I just want to mount the ceph
9:57 AM, Mykola Dvornik wrote:
> I would strongly(!) suggest you add a few more OSDs to the cluster before
> things get worse or data gets corrupted.
>
> -Mykola
>
>
> On Tue, Feb 2, 2016 at 6:45 PM, Zhao Xu wrote:
>
> Hi All,
> Recently our ceph storage is running at low pe
Hi All,
Recently our ceph storage has been running at low performance. Today, we
cannot write to the folder. We tried to unmount the ceph storage and then
re-mount it; however, we cannot even mount it now:
# mount -v -t ceph igc-head,is1,i1,i2,i3:6789:/ /mnt/igcfs/ -o
name=admin,secretfile=/etc/admi
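When a kernel mount like the one above hangs, one quick sanity check is whether each monitor named in the device string is even reachable on its port (6789 here), since the client must contact a mon before anything else. A small sketch, using the monitor hosts from the mount command above:

```python
import socket

def reachable_mons(mons, port=6789, timeout=2.0):
    """Return the monitor hosts that accept a TCP connection on `port`."""
    up = []
    for host in mons:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                up.append(host)
        except OSError:
            # DNS failure, refused connection, or timeout: mon not reachable.
            pass
    return up

# Monitor hosts from the mount command in this thread:
print(reachable_mons(["igc-head", "is1", "i1", "i2", "i3"]))
```

If the mons respond but the mount still hangs, the problem is more likely on the MDS side (e.g. the stuck sessions discussed above) than in basic connectivity.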