: "auth
list"}]: dispatch
*Can anybody give me a hint on what I should check?*
Thanks,
--
Erming Pei, Senior System Analyst
Information Services & Technology
University of Alberta, Canada
Tel: 780-492-9914  Fax: 780-492-1729
---
Should I check the data pool (instead of the metadata) to see whether there is
any effect on it?
Erming
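A sketch of how the data pool can be inspected directly; the pool name cephfs_data below is an assumption (ceph df shows the real names):

    # list a few objects in the data pool and show per-pool usage
    rados -p cephfs_data ls | head
    rados df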
--
---------
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist
Research Computing Group
Information Services & Technology
University of Alberta, Canada
On 9/2/15, 9:31 AM, Gregory Farnum wrote:
[ Re-adding the list. ]
On Wed, Sep 2, 2015 at 4:29 PM, Erming Pei wrote:
Hi Gregory,
Thanks very much for the confirmation and explanation.
And I presume you have an MDS cap in there as well?
Is there a difference between setting this cap and
Hi,
After I set up more than one MDS server, it sometimes gets stuck or slow on
the client end. If I stop one MDS, the client end just hangs.
I accidentally set mds bal frag = true; not sure if it matters. I later
disabled this option.
Is there any reason for the above?
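A minimal sketch of the checks this usually involves, assuming the option in question is mds_bal_frag and that the relevant daemon is mds.0 (both assumptions):

    # see how many MDS daemons are up and which one is active
    ceph mds stat
    ceph -s

    # turn directory fragmentation balancing back off at runtime
    ceph tell mds.0 injectargs '--mds_bal_frag=false'

    # or set it persistently in ceph.conf under [mds]:
    #   mds bal frag = false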
--
-
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist
Research Computing Group
Information Services & Technology
University of Alberta, Canada
Tel: +1 780-492-9914  Fax: +1 780-492-1729
2015 at 3:06 PM, Erming Pei wrote:
Hi,
Is there a way to list the namespaces in CephFS? And how are they set up?
From the man page of mount.ceph, I see this:
To mount only part of the namespace:
mount.ceph monhost1:/some/small/thing /mnt/thing
But how do I find out what namespaces exist in the first place?
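One way to see what subtrees exist before mounting only part of the tree; the monitor host, mount points, and auth name below are placeholders:

    # mount the root of the filesystem and list the top-level directories --
    # the "namespaces" are just paths in the one CephFS directory tree
    mount -t ceph monhost1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    ls /mnt/cephfs

    # then mount only the subtree you care about
    mount -t ceph monhost1:6789:/some/small/thing /mnt/thing -o name=admin,secretfile=/etc/ceph/admin.secret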
Hi,
I am just wondering which approach is better (within one single file system):
set up one data pool for each project, or let the projects share one big
pool?
Thanks,
Erming
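For reference, a rough sketch of the per-project-pool approach; the pool name, PG count, file system name, and path below are all made up:

    # create a dedicated data pool and add it to the file system
    ceph osd pool create projectA_data 128
    ceph fs add_data_pool cephfs projectA_data

    # point one directory at the new pool (new files created under it land there);
    # run this on a client with the file system mounted
    setfattr -n ceph.dir.layout.pool -v projectA_data /mnt/cephfs/projectA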
[...] stuck unclean; recovery 58417161/113290060 objects misplaced (51.564%);
mds0: Client physics-007:Physics01_data failing to respond to cache pressure

Is it critical?
thanks,
Erming
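A sketch of how a warning like this can be dug into; mds.0 below is an assumption:

    # full breakdown of the misplaced objects and the MDS health warnings
    ceph health detail
    ceph -s

    # list client sessions on the MDS (run on the MDS host via the admin socket)
    ceph daemon mds.0 session ls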
--
-----
Erming Pei, Ph.D
Senior System Analyst; Grid/Cloud Specialist
Thanks,
Erming
--
--------
Erming Pei, Ph.D, Senior System Analyst
HPC Grid/Cloud Specialist, ComputeCanada/WestGrid
Research Computing Group, IST
University of Alberta, Canada T6G 2H1
Email: erm...@ualberta.ca erming@cern.ch
Tel. : +1 780-492-9914
(I got no response on the current list, so I forwarded this to
ceph-us...@ceph.com.)
Sorry if it's duplicated.
Original Message
Subject: scrub error with ceph
Date: Mon, 7 Dec 2015 14:15:07 -0700
From: Erming Pei
To: ceph-users@lists.ceph.com
Hi,
I
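For a scrub error, a rough sketch of the usual first checks; the PG id below is a placeholder:

    # find which PG is inconsistent
    ceph health detail | grep -i inconsist

    # inspect it and, if appropriate, ask the OSDs to repair it
    ceph pg 2.1f query
    ceph pg repair 2.1f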