[...]pool=cephfs_metadata,allow rwx pool=cephfs_data'
Thanks!
*Nate Curry*
On Tue, Apr 12, 2016 at 3:56 PM, Gregory Farnum wrote:
> On Tue, Apr 12, 2016 at 12:20 PM, Nate Curry wrote:
> > I am seeing an issue with cephfs where I am unable to write changes to
> > the f[...]emount the filesystem without any issues. It also reboots and
> > mounts no problem. I am not sure what this could be caused by. Any ideas?
*Nate Curry*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
You are correct, sir. I modified the user's capabilities by adding the mds
cap with the 'allow r' permission, using the following command:
*ceph auth caps client.cephfs mon 'allow r' mds 'allow r' osd 'allow rwx
pool=cephfs_metadata,allow rwx pool=cephfs_data'*
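As a follow-up for anyone hitting the same thing, a minimal sketch of verifying the caps and testing the mount with the kernel client; the monitor address 192.0.2.10 and the mount point /mnt/cephfs below are placeholders, not details from this thread:

    # Confirm the client now carries an mds cap alongside mon and osd
    ceph auth get client.cephfs

    # Store the secret where the kernel client can read it
    ceph auth get-key client.cephfs > /etc/ceph/client.cephfs.secret

    # Mount CephFS through a monitor (placeholder address)
    mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
        -o name=cephfs,secretfile=/etc/ceph/client.cephfs.secret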
It has permissions to the pools:

    client.cephfs
        key: #
        caps: [mon] allow r
        caps: [osd] allow rwx pool=datastore_metadata,allow rwx pool=datastore_data
Could someone tell me what else I would need to give the user permission to
in order to be able to mount the filesystem?
That was it. I had recently rebuilt the OSD hosts and completely forgot to
configure the firewall.
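For reference, a sketch of the openings Ceph normally needs when the firewall comes back; this assumes firewalld and the default public zone, neither of which is stated in the thread:

    # Monitors listen on 6789/tcp
    firewall-cmd --zone=public --permanent --add-port=6789/tcp

    # OSD (and MDS) daemons bind inside the 6800-7300/tcp range
    firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp

    firewall-cmd --reload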
Thanks,
*Nate Curry*
[...]
 26   1.81799   osd.26   down   0   1.00000
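In case it helps anyone else reading, a sketch of chasing a down OSD like the one above, assuming systemd-managed daemons (an assumption, since the release isn't stated here):

    # List the OSDs the cluster currently sees as down
    ceph osd tree | grep down

    # On the host that owns osd.26, inspect and restart the daemon
    systemctl status ceph-osd@26
    systemctl restart ceph-osd@26

    # Watch the cluster return to HEALTH_OK
    ceph -w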
*Nate Curry*
Yes, that was what I meant. Thanks. Was that in a production environment?
Nate Curry
On Jul 10, 2015 11:21 AM, "Quentin Hartman" wrote:
> You mean the hardware config? They are older Core2-based servers with 4GB
> of RAM. Nothing special. I have one running mon and rgw, one r[...]
What was your monitor nodes' configuration when you had multiple Ceph
daemons running on them?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 9, 2015 at 5:36 PM, Quentin Hartman <qhart...@direwolfdigital.com> wrote:
[...]supposed to straddle both the ceph-only network and the storage network, or
just sit on the ceph network?
Another question: can I run multiple things on the monitor nodes, like
the RADOS GW and the MDS?
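Colocating those daemons on a monitor host is common on small clusters; a sketch using ceph-deploy, where mon1 stands in for whatever the monitor host is actually called:

    # Add an MDS on the existing monitor host
    ceph-deploy mds create mon1

    # Add a RADOS Gateway instance on the same host
    ceph-deploy rgw create mon1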
Thanks,
*Nate Curry*
Are you using the 4TB disks for the journal?
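If the journals end up on a separate SSD instead of the 4TB data disks, a sketch of that layout with ceph-disk (the tooling of that era); /dev/sdd as the data disk and /dev/sdb as the journal SSD are placeholders:

    # Data on /dev/sdd, journal on a partition ceph-disk creates on /dev/sdb
    ceph-disk prepare --cluster ceph /dev/sdd /dev/sdb

    # Activate the newly prepared OSD (first data partition by default)
    ceph-disk activate /dev/sdd1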
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson wrote:
> I'd def be happy to share what numbers I can get out of it. I'm st[...]
[...]
Would I need 64 GB of memory per monitor? I don't think that would scale
well at some point, so I am thinking that is not correct. Can I get some
clarification?
Thanks,
*Nate Curry*
4TB is too much to lose? Why would it matter if you lost one 4TB disk with the
redundancy? Won't it auto-recover from the disk failure?
Nate Curry
On Jul 1, 2015 6:12 PM, "German Anders" wrote:
> I would probably go with smaller OSD disks; 4TB is too much to lose in
> case [...]
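On the auto-recovery question above: with a replicated pool, once the failed disk's OSD is marked out, Ceph re-creates the missing copies on the surviving OSDs. A sketch of watching that happen, reusing osd.26 and the cephfs_data pool from earlier in the thread purely as examples:

    # How many copies the pool keeps
    ceph osd pool get cephfs_data size

    # Mark the failed OSD out so its placement groups are re-replicated
    ceph osd out 26

    # Follow the backfill/recovery progress
    ceph -w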
[...]as well as 2 hot spares for the 6TB drives and 2 drives for the OS. I was
thinking of 400GB SSD drives but am wondering if that is too much. Any
informed opinions would be appreciated.
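On sizing those SSDs: the rule of thumb in the Ceph docs is journal size = 2 * (expected throughput * filestore max sync interval), which lands in the single-digit GB per journal rather than hundreds. A sketch of the arithmetic with illustrative numbers only (the 500 MB/s and 5 s below are assumptions, not measurements from this cluster):

    # 2 * 500 MB/s * 5 s = 5000 MB, so roughly 5 GB per journal
    # osd journal size is expressed in MB in ceph.conf
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
    osd journal size = 5120
    EOF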
Thanks,
*Nate Curry*