Hi Ceph Users,
Could anyone reply to my questions below? It would be a great help, I appreciate it.
--Rakesh Parkiti
From: ceph-users on behalf of Rakesh Parkiti
Sent: 03 December 2016 13:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RBD Image Features
block_name_prefix: rbd_data.105f238e1f29
format: 2
features: layering
flags:
user@tom1:~$ sudo rbd map --image rbd/img2
/dev/rbd0
user@tom1:~$ rbd showmapped
id pool image snap device
0  rbd  img2  -    /dev/rbd0
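For context, the output above is the tail of rbd info; a complete run against the same image would look roughly like this on Jewel (the size and object figures are illustrative, not from the original post):

user@tom1:~$ rbd info rbd/img2
rbd image 'img2':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.105f238e1f29
        format: 2
        features: layering
        flags:

Because features lists only layering here, the kernel client can map the image; images created with the full Jewel default feature set (exclusive-lock, object-map, fast-diff, deep-flatten) typically cannot be mapped by older kernels.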
Can someone help in replying to these questions? It would be a great help.
"Unable to create an RBD image on a specific crush rule set, not with the default crush rule, on the AWS environment."
Whereas with the same procedure at my local setup, with the same Ceph Jewel ver. 10.2.2 on RHEL 7.2 (Maipo), I am able to create an RBD image without any issues on a specific crush rule set.
What could be the issue? Please share any reasons.
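In case it helps narrow things down, on Jewel the crush rule is assigned per pool, so a rough sketch of how I would verify and set it (the rule name my-ruleset, rule id 1, and pool name are hypothetical, not from the original post):

# ceph osd crush rule list
# ceph osd crush rule dump my-ruleset
# ceph osd pool set rbd crush_ruleset 1
# rbd create rbd/img3 --size 10G --image-feature layering

If the pool's PGs never reach active+clean under the non-default rule (for example because the rule cannot place replicas on the AWS topology), rbd create will block rather than fail outright, which can look like an image-creation problem.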
sdsuser@siteAmon:~/cluster$ ceph -s
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating
sdsuser@siteAmon:~/cluster$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
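With 0 osds up and 0 in, the 64 PGs will stay in creating indefinitely; OSDs have to be added before the pool can go active+clean. On Jewel with ceph-deploy that would be roughly the following (the OSD hostname siteAosd1 and disk /dev/sdb are hypothetical):

sdsuser@siteAmon:~/cluster$ ceph-deploy osd prepare siteAosd1:/dev/sdb
sdsuser@siteAmon:~/cluster$ ceph-deploy osd activate siteAosd1:/dev/sdb1
sdsuser@siteAmon:~/cluster$ ceph osd tree

Once the OSDs show up and in, the creating PGs should peer and move to active+clean on their own.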
Hi All,
Does CEPH support auto tiering?
Thanks,
Rakesh Parkiti
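As far as I know there is no fully automatic tiering in Jewel, but Ceph does ship cache tiering, which is the closest built-in mechanism; a minimal sketch (the pool names cold-pool and hot-pool are hypothetical):

# ceph osd tier add cold-pool hot-pool
# ceph osd tier cache-mode hot-pool writeback
# ceph osd tier set-overlay cold-pool hot-pool
# ceph osd pool set hot-pool hit_set_type bloom

Clients then address cold-pool as usual, while reads and writes are transparently serviced from hot-pool according to the cache policy.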
Hi Ishmael,
Try creating the image with the image feature set to layering only:
# rbd create --image pool-name/image-name --size 15G --image-feature layering
# rbd map --image pool-name/image-name
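If the image already exists, an alternative (to the best of my knowledge) is to strip the Jewel-default features the kernel client does not understand, then map it:

# rbd feature disable pool-name/image-name exclusive-lock object-map fast-diff deep-flatten
# rbd map --image pool-name/image-name

The layering feature itself is supported by the kernel client and can stay enabled.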
Thanks
Rakesh Parkiti
On Jun 23, 2016 19:46, Ishmael Tsoaela wrote:
Hi All, I have created an image but cannot map it.
$ sudo rbd map --image PoolA/PoolA_image1 --name client.rbd
/dev/rbd0
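When rbd map fails, the kernel log usually states the reason, most often unsupported image features on Jewel-created images; the checks I would run first (assuming the same client.rbd user):

$ dmesg | tail
$ rbd info PoolA/PoolA_image1 --name client.rbd

If features lists more than layering, the rbd feature disable command shown above applies here as well.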
-- Rakesh Parkiti
-- Forwarded message --
From: Ishmael Tsoaela
Date: Fri, Jun 17, 2016 at 5:31 PM
Subject: [ceph-users] image map failed
To: ceph-users@lists.ceph.com
Hi, will someone please assist; I am new to Ceph.
osdmap: 19 osds: 19 up, 19 in
flags sortbitwise
pgmap v25719: 1286 pgs, 5 pools, 92160 kB data, 9 objects
3998 MB used, 4704 GB / 4708 GB avail
1286 active+clean
Can anyone please help with a solution for the above issue?
Thanks,
Rakesh
Hello,
Unable to mount the CephFS file system from the client node; it fails with "mount error 5 = Input/output error".
The MDS was installed on a separate node. Ceph cluster health is OK and the mds services are running. The firewall was disabled across all the nodes in the cluster.
-- Ceph Cluster Nodes (RHEL 7.2 version)
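For what it is worth, mount error 5 on CephFS usually points at the MDS not being active (or at an authentication problem) rather than at the network; the checks and a typical Jewel-era kernel mount attempt I would try (the monitor hostname mon1 and the secret-file path are hypothetical):

# ceph mds stat
# ceph auth get-key client.admin > /etc/ceph/admin.secret
# mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# dmesg | tail

If ceph mds stat does not report an active MDS (for example because no filesystem was created with ceph fs new), the kernel client returns exactly this I/O error.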