[ceph-users] Fail to automount OSD after reboot when the /var partition is ext4, but automount succeeds when the /var partition is xfs

2016-08-18 Thread Leo Yu
hi cephers, i have deployed a jewel 10.2.2 cluster, and the OSDs fail to automount after reboot when the /var partition is ext4: [root@node1 ~]# lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT fd0 sda ├─sda1 ext4 497a4f82-3cbf-4e27-b026-cdd3c5ecc2dd /boot └─sda2

[ceph-users] Fail to automount OSD after reboot when the /var partition is ext4, but automount succeeds when the /var partition is xfs

2016-08-18 Thread Leo Yu
hi cephers, i have deployed a jewel 10.2.2 cluster, and the OSDs fail to automount after reboot with this system partition layout: [root@node1 ~]# lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT fd0 sda ├─sda1 ext4 497a4f82-3cbf-4e27-b026-cdd3c5ecc2dd /boot └─sda2 LVM2_
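
For context: in jewel, OSD data partitions are normally mounted by udev invoking ceph-disk activate, so a minimal manual recovery after a reboot looks roughly like the sketch below (the device name /dev/sdb1 and OSD id 0 are assumptions, not taken from the posts):

    # see which partitions ceph-disk recognizes
    ceph-disk list
    # activate one OSD data partition by hand
    ceph-disk activate /dev/sdb1
    # or activate every OSD partition ceph-disk can find
    ceph-disk activate-all
    # confirm the OSD daemon came up
    systemctl status ceph-osd@0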

[ceph-users] is it possible to get and set zonegroup and zone through the admin REST API?

2016-08-16 Thread Leo Yu
hi cephers, is it possible to set the zonegroup and zone through the admin REST API? i can get and set the zonegroup and zone with the radosgw-admin command, like the following: [root@ceph04 src]# ./radosgw-admin zone get --rgw-zone=us-east-2 # dump this to a file and inject it back after modifying 2016-08-17 13:40
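
The radosgw-admin round trip mentioned in the preview looks roughly like this (the zonegroup name "us" is an assumption; as far as I know, the jewel admin ops REST API only covers users, buckets, quotas and usage, not zone/zonegroup configuration):

    # dump the zone, edit the JSON, inject it back
    radosgw-admin zone get --rgw-zone=us-east-2 > zone.json
    radosgw-admin zone set --rgw-zone=us-east-2 --infile=zone.json
    # same pattern for the zonegroup
    radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json
    radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json
    # commit the updated period so the change takes effect
    radosgw-admin period update --commit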

[ceph-users] ceph: recreating an already existing bucket throws an error when the user already has max_buckets buckets

2016-08-10 Thread Leo Yu
hi, i created a user uid=testquato2; the user can create at most max_buckets = 10 buckets: [root@node1 ~]# radosgw-admin user info --uid=testquato2 { "user_id": "testquato2", "display_name": "testquato2", "email": "", "suspended": 0, "max_buckets": 10, "auid": 0, "subusers":
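
A rough sketch of checking whether the error is really the max_buckets cap and of raising the limit (the new value of 20 is only an example):

    # current quota and bucket count for the user
    radosgw-admin user info --uid=testquato2
    radosgw-admin bucket list --uid=testquato2
    # raise the per-user bucket limit
    radosgw-admin user modify --uid=testquato2 --max-buckets=20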

[ceph-users] [jewel][rgw] why is the usage log record date 16 hours later than the real operation time?

2016-07-28 Thread Leo Yu
hi all, i want to get the usage of a user, so i use the command radosgw-admin usage show, but i cannot get the usage with --start-date unless i subtract 16 hours. i have rgw on both ceph01 and ceph03, civetweb on port 7480, and the ceph version is jewel 10.2.2. the time zone of ceph01 and ceph03 [roo
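
The usage log keeps its timestamps in UTC (as far as I know), so a timezone mismatch between the client and the log is one common source of this kind of fixed offset; a sketch of querying a date range, with the uid and dates as examples only:

    # usage for one user over a UTC date range
    radosgw-admin usage show --uid=testquato2 --start-date=2016-07-27 --end-date=2016-07-29
    # usage for all users, summary only
    radosgw-admin usage show --show-log-entries=false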

[ceph-users] deleted all pools, but the data still exists

2016-06-20 Thread Leo Yu
hi, i deleted all pools with this script: arr=( $(rados lspools) ); for key in "${!arr[@]}"; do ceph osd pool delete ${arr[$key]} ${arr[$key]} --yes-i-really-really-mean-it; done. the output of ceph df after deleting all pools shows there is no pool any more, but still 251M of used disk space. [root@cep
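
For readability, the loop from the preview reflowed, plus the checks that usually follow; note that the global USED figure in ceph df also counts OSD filesystem and journal overhead, so a small amount of used space with zero pools is usually expected rather than a sign of leftover objects:

    # reflowed version of the loop from the post
    arr=( $(rados lspools) )
    for key in "${!arr[@]}"; do
        ceph osd pool delete "${arr[$key]}" "${arr[$key]}" --yes-i-really-really-mean-it
    done
    # verify nothing is left, and compare per-pool vs global usage
    rados lspools
    ceph df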