[ceph-users] admin_socket: exception getting command descriptions

2017-02-11 Thread Vince

Hi,

We are getting the below error while trying to create a monitor from the
admin node.


ceph-deploy mon create-initial


[osd-ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.osd-ceph1.asok mon_status
[osd-ceph1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory



I couldn't see /var/run/ceph/ceph-mon.osd-ceph1.asok on the monitor server.

See the full output
==
[osd-ceph1][DEBUG ] connected to host: osd-ceph1
[osd-ceph1][DEBUG ] detect platform information from remote host
[osd-ceph1][DEBUG ] detect machine type
[osd-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.3.1611 Core
[osd-ceph1][DEBUG ] determining if provided host has same hostname in remote
[osd-ceph1][DEBUG ] get remote short hostname
[osd-ceph1][DEBUG ] deploying mon to osd-ceph1
[osd-ceph1][DEBUG ] get remote short hostname
[osd-ceph1][DEBUG ] remote hostname: osd-ceph1
[osd-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[osd-ceph1][DEBUG ] create the mon path if it does not exist
[osd-ceph1][DEBUG ] checking for done path: 
/var/lib/ceph/mon/ceph-osd-ceph1/done

[osd-ceph1][DEBUG ] create a done file to avoid re-doing the mon deployment
[osd-ceph1][DEBUG ] create the init path if it does not exist
[osd-ceph1][INFO  ] Running command: systemctl enable ceph.target
[osd-ceph1][INFO  ] Running command: systemctl enable ceph-mon@osd-ceph1
[osd-ceph1][INFO  ] Running command: systemctl start ceph-mon@osd-ceph1
[osd-ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.osd-ceph1.asok mon_status
[osd-ceph1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory

[osd-ceph1][WARNIN] monitor: mon.osd-ceph1, might not be running yet
[osd-ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.osd-ceph1.asok mon_status
[osd-ceph1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory

[osd-ceph1][WARNIN] monitor osd-ceph1 does not exist in monmap
[ceph_deploy.mon][INFO  ] processing monitor mon.osd-ceph1
root@osd-ceph1's password:
root@osd-ceph1's password:
[osd-ceph1][DEBUG ] connected to host: osd-ceph1
[osd-ceph1][DEBUG ] detect platform information from remote host
[osd-ceph1][DEBUG ] detect machine type
[osd-ceph1][DEBUG ] find the location of an executable
[osd-ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon 
/var/run/ceph/ceph-mon.osd-ceph1.asok mon_status
[osd-ceph1][ERROR ] admin_socket: exception getting command 
descriptions: [Errno 2] No such file or directory

==
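
The error means the ceph CLI could not find the admin socket at the path shown,
which usually means the mon daemon never actually started. A minimal set of
checks to run on the monitor node (a sketch, using the hostname from the log
above):

# On osd-ceph1: is the mon service up, and did it create the admin socket?
systemctl status ceph-mon@osd-ceph1
ls -l /var/run/ceph/
journalctl -u ceph-mon@osd-ceph1 -n 50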



Re: [ceph-users] admin_socket: exception getting command descriptions

2017-02-14 Thread Vince

Hi Liuchang0812,

Thank you for replying to the thread.

I have fixed this issue. It was caused by incorrect ownership of
/var/lib/ceph; the directory was owned by root, and changing the ownership
to ceph resolved it.
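
For the archives, a minimal sketch of that fix (assuming the default ceph user
and group, and the monitor name from the log above):

# Give the ceph user ownership of its state directory, then restart the mon
chown -R ceph:ceph /var/lib/ceph
systemctl restart ceph-mon@osd-ceph1
# The admin socket should now answer
ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.osd-ceph1.asok mon_status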


However, I am seeing a new error while preparing the OSDs. Any idea
about this?


===

[osd-ceph2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd 
stat --format=json


[osd-ceph2][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph_deploy.osd][DEBUG ] Host osd-ceph2 is now ready for osd use.

===

Full log

===

[root@admin-ceph mycluster]# ceph-deploy osd prepare 
osd-ceph2:/var/local/osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: 
/root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy osd 
prepare osd-ceph2:/var/local/osd0

[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  disk  : 
[('osd-ceph2', '/var/local/osd0', None)]

[ceph_deploy.cli][INFO  ]  dmcrypt   : False
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  bluestore : None
[ceph_deploy.cli][INFO  ]  overwrite_conf: False
[ceph_deploy.cli][INFO  ]  subcommand: prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir   : 
/etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   : 


[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  fs_type   : xfs
[ceph_deploy.cli][INFO  ]  func  : at 0x25d2b90>

[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  zap_disk  : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks 
osd-ceph2:/var/local/osd0:

[osd-ceph2][DEBUG ] connection detected need for sudo
[osd-ceph2][DEBUG ] connected to host: osd-ceph2
[osd-ceph2][DEBUG ] detect platform information from remote host
[osd-ceph2][DEBUG ] detect machine type
[osd-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to osd-ceph2
[osd-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host osd-ceph2 disk /var/local/osd0 
journal None activate False

[osd-ceph2][DEBUG ] find the location of an executable
[osd-ceph2][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare 
--cluster ceph --fs-type xfs -- /var/local/osd0
[osd-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[osd-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[osd-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[osd-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[osd-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[osd-ceph2][WARNIN] populate_data_path: Preparing osd data dir 
/var/local/osd0
[osd-ceph2][WARNIN] command: Running command: /sbin/restorecon -R 
/var/local/osd0/ceph_fsid.16899.tmp
[osd-ceph2][WARNIN] command: Running command: /usr/bin/chown -R 
ceph:ceph /var/local/osd0/ceph_fsid.16899.tmp
[osd-ceph2][WARNIN] command: Running command: /sbin/restorecon -R 
/var/local/osd0/fsid.16899.tmp
[osd-ceph2][WARNIN] command: Running command: /usr/bin/chown -R 
ceph:ceph /var/local/osd0/fsid.16899.tmp
[osd-ceph2][WARNIN] command: Running command: /sbin/restorecon -R 
/var/local/osd0/magic.16899.tmp
[osd-ceph2][WARNIN] command: Running command: /usr/bin/chown -R 
ceph:ceph /var/local/osd0/magic.16899.tmp

[osd-ceph2][INFO  ] checking OSD status...
[osd-ceph2][DEBUG ] find the location of an executable
[osd-ceph2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd 
stat --format=json


[osd-ceph2][WARNIN] No data was received after 300 seconds, disconnecting...
[ceph_deploy.osd][DEBUG ] Host osd-ceph2 is now ready for osd use.
===
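
Regarding the 300-second timeout above: ceph-deploy is running "ceph osd stat"
on the target node, so the hang usually means osd-ceph2 cannot reach the
monitors (missing /etc/ceph/ceph.conf or admin keyring, or 6789/tcp blocked).
A rough check to run on osd-ceph2 (a sketch, using the names from the log):

# If either of these hangs, the node cannot reach the monitors
sudo ceph --cluster=ceph -s
sudo ceph --cluster=ceph osd stat --format=json
# Confirm the config and admin keyring were pushed to this node
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring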



On 02/12/2017 10:23 AM, liuchang0812 wrote:

Hi, Vince

What's your Ceph version?

`ceph --version`

2017-02-11 19:10 GMT+08:00 Vince :

Hi,

We are getting the below error while trying to create monitor from the admin
node.

ceph-deploy mon create-initial


[osd-ceph1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon
/var/run/ceph/ceph-mon.osd-ceph1.asok mon_status
[osd-ceph1][ERROR ] admin_socket: exception getting command descriptions:
[Errno 2] No such file or directory


I couldn't see /var/run/ceph/ceph-mon.osd-cep

[ceph-users] CloudRuntimeException: Failed to create storage pool

2017-02-20 Thread Vince

Hello,

I have created a Ceph cluster with one admin server, one monitor and two
OSDs. The setup is complete, but when trying to add Ceph as primary
storage in CloudStack, I am getting the below error in the logs.


Am I missing something? Please help.



2017-02-20 21:03:02,842 DEBUG 
[o.a.c.s.d.l.CloudStackPrimaryDataStoreLifeCycleImpl] 
(catalina-exec-6:ctx-f293a10c ctx-093b4faf) In createPool Adding the 
pool to each of the hosts
2017-02-20 21:03:02,843 DEBUG [c.c.s.StorageManagerImpl] 
(catalina-exec-6:ctx-f293a10c ctx-093b4faf) Adding pool null to host 1
2017-02-20 21:03:02,845 DEBUG [c.c.a.t.Request] 
(catalina-exec-6:ctx-f293a10c ctx-093b4faf) Seq 1-653584895922143294: 
Sending  { Cmd , MgmtId: 207381009036, via: 1(hyperkvm.x.com), Ver: 
v1, Flags: 100011, 
[{"com.cloud.agent.api.ModifyStoragePoolCommand":{"add":true,"pool":{"id":14,"uuid":"9c51d737-3a6f-3bb3-8f28-109954fc2ef0","host":"mon1..com","path":"cloudstack","userInfo":"cloudstack:AQDagqZYgSSpOBAATFvSt4tz3cOUWhNtR-NaoQ==","port":6789,"type":"RBD"},"localPath":"/mnt//ac5436a6-5889-30eb-b079-ac1e05a30526","wait":0}}] 
}
2017-02-20 21:03:02,944 DEBUG [c.c.a.t.Request] 
(AgentManager-Handler-15:null) Seq 1-653584895922143294: Processing:  { 
Ans: , MgmtId: 207381009036, via: 1, Ver: v1, Flags: 10, 
[{"com.cloud.agent.api.Answer":{"result":false,"details":"com.cloud.utils.exception.*CloudRuntimeException: 
Failed to create storage pool: 
9c51d737-3a6f-3bb3-8f28-109954fc2ef0\n\tat 
*com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:524)\n\tat 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:277)\n\tat 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:271)\n\tat 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2823)\n\tat 
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1325)\n\tat 
com.cloud.agent.Agent.processRequest(Agent.java:501)\n\tat 
com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:808)\n\tat 
com.cloud.utils.nio.Task.run(Task.java:84)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
java.lang.Thread.run(Thread.java:745)\n","wait":0}}] }
2017-02-20 21:03:02,944 DEBUG [c.c.a.t.Request] 
(catalina-exec-6:ctx-f293a10c ctx-093b4faf) Seq 1-653584895922143294: 
Received:  { Ans: , MgmtId: 207381009036, via: 1, Ver: v1, Flags: 10, { 
Answer } }
2017-02-20 21:03:02,944 DEBUG [c.c.a.m.AgentManagerImpl] 
(catalina-exec-6:ctx-f293a10c ctx-093b4faf) Details from executing class 
com.cloud.agent.api.ModifyStoragePoolCommand: 
com.cloud.utils.exception.CloudRuntimeException: Failed to create 
storage pool: 9c51d737-3a6f-3bb3-8f28-109954fc2ef0
at 
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:524)
at 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:277)
at 
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:271)
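
The ModifyStoragePoolCommand above points libvirt at an RBD pool named
"cloudstack" and a cephx user "cloudstack". A minimal sketch of what the Ceph
side needs before CloudStack can create the libvirt pool (pool and user names
are taken from the log; the pg count, the capabilities and the <monitor-host>
placeholder are illustrative, not from the original post):

# On the Ceph admin node: create the pool and a cephx user for CloudStack
ceph osd pool create cloudstack 64
ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow rwx pool=cloudstack'

# On the KVM host: confirm the monitor named in the pool definition is reachable
nc -zv <monitor-host> 6789

If the pool and user already exist, a "Failed to create storage pool" from
LibvirtStorageAdaptor usually comes down to the hypervisor not being able to
reach or authenticate to the monitor, as in the RADOS timeout thread further
down.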






[ceph-users] INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

2017-03-21 Thread Vince

Hi,

I am getting the below error in /var/log/messages after setting up the ceph monitor.

===
Mar 21 08:48:23 mon1 ceph-create-keys: admin_socket: exception getting 
command descriptions: [Errno 2] No such file or directory
Mar 21 08:48:23 mon1 ceph-create-keys: INFO:ceph-create-keys:ceph-mon 
admin socket not ready yet.
Mar 21 08:48:23 mon1 ceph-create-keys: admin_socket: exception getting 
command descriptions: [Errno 2] No such file or directory
Mar 21 08:48:23 mon1 ceph-create-keys: INFO:ceph-create-keys:ceph-mon 
admin socket not ready yet.

===

On checking the ceph-create-keys service status, I am getting the below error.

===
[root@mon1 ~]# systemctl status ceph-create-keys@mon1.service 

● ceph-create-keys@mon1.service  - 
Ceph cluster key creator task
Loaded: loaded (/usr/lib/systemd/system/ceph-create-keys@.service; 
static; vendor preset: disabled)
Active: inactive (dead) since Thu 2017-02-16 10:47:14 PST; 1 months 2 
days ago

Condition: start condition failed at Tue 2017-03-21 05:47:42 PDT; 2s ago
ConditionPathExists=!/var/lib/ceph/bootstrap-mds/ceph.keyring was not met
Main PID: 2576 (code=exited, status=0/SUCCESS)
===

Has anyone faced this error before?
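
The message comes from ceph-create-keys polling the monitor's admin socket, so
the first things to verify are that the mon for this host is running and that
the socket exists. A minimal sketch, assuming the monitor id is mon1 as in the
log above:

systemctl status ceph-mon@mon1
ls -l /var/run/ceph/
ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon1.asok mon_status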


Re: [ceph-users] INFO:ceph-create-keys:ceph-mon admin socket not ready yet.

2017-03-21 Thread Vince

Hi,

I have checked and confirmed that the monitor daemon is running and that the
socket file /var/run/ceph/ceph-mon.mon1.asok has been created, but
/var/log/messages is still showing the error.



Mar 22 00:47:38 mon1 ceph-create-keys: admin_socket: exception getting 
command descriptions: [Errno 2] No such file or directory
Mar 22 00:47:38 mon1 ceph-create-keys: INFO:ceph-create-keys:ceph-mon 
admin socket not ready yet.




[root@mon1 ~]# ll /var/run/ceph/ceph-mon.mon1.asok
srwxr-xr-x. 1 ceph ceph 0 Mar 21 04:13 /var/run/ceph/ceph-mon.mon1.asok


=
[root@mon1 ~]# systemctl status ceph-mon@mon1.service
● ceph-mon@mon1.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; 
vendor preset: disabled)

   Active: active (running) since Tue 2017-03-21 04:13:20 PDT; 17h ago
 Main PID: 29746 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@mon1.service
   └─29746 /usr/bin/ceph-mon -f --cluster ceph --id mon1 
--setuser ceph --setgroup ceph


Mar 21 04:13:20 mon1.ihnetworks.com systemd[1]: Started Ceph cluster 
monitor daemon.
Mar 21 04:13:20 mon1.ihnetworks.com systemd[1]: Starting Ceph cluster 
monitor daemon...
Mar 21 04:13:20 mon1.ihnetworks.com ceph-mon[29746]: starting mon.mon1 
rank 0 at 10.10.48.7:6789/0 mon_data /var/lib/ceph/mon/ceph-mon1 fsid 
ebac75fc-e631...cbcdd1d25
Mar 21 04:21:23 mon1.ihnetworks.com systemd[1]: 
[/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' 
in section 'Service'
Mar 21 04:35:47 mon1.ihnetworks.com systemd[1]: 
[/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' 
in section 'Service'
Mar 21 04:40:25 mon1.ihnetworks.com systemd[1]: 
[/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' 
in section 'Service'
Mar 21 04:43:01 mon1.ihnetworks.com systemd[1]: 
[/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' 
in section 'Service'
Mar 21 05:39:56 mon1.ihnetworks.com systemd[1]: 
[/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' 
in section 'Service'

Hint: Some lines were ellipsized, use -l to show in full.
=
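
Since the socket is present, it is worth querying it by hand; "admin socket not
ready yet" is printed whenever the socket query itself fails, so if the manual
query works, re-running ceph-create-keys interactively should show what it is
actually waiting for. A short sketch (monitor id mon1 as above; the last
command is roughly what the systemd unit invokes):

ceph daemon mon.mon1 mon_status        # "state" should be "leader" or "peon"
ceph daemon mon.mon1 quorum_status
ceph-create-keys --cluster ceph --id mon1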


On 03/21/2017 11:34 PM, Wes Dillingham wrote:
Generally this means the monitor daemon is not running. Is the monitor 
daemon running? The monitor daemon creates the admin socket in 
/var/run/ceph/$socket


Elaborate on how you are attempting to deploy ceph.

On Tue, Mar 21, 2017 at 9:01 AM, Vince <vi...@ihnetworks.com> wrote:


Hi,

I am getting the below error in messages after setting up ceph
monitor.

===
Mar 21 08:48:23 mon1 ceph-create-keys: admin_socket: exception
getting command descriptions: [Errno 2] No such file or directory
Mar 21 08:48:23 mon1 ceph-create-keys:
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
Mar 21 08:48:23 mon1 ceph-create-keys: admin_socket: exception
getting command descriptions: [Errno 2] No such file or directory
Mar 21 08:48:23 mon1 ceph-create-keys:
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
===

On checking the ceph-create-keys service status, getting the below
error.

===
[root@mon1 ~]# systemctl status ceph-create-keys@mon1.service
● ceph-create-keys@mon1.service - Ceph cluster key creator task
Loaded: loaded (/usr/lib/systemd/system/ceph-create-keys@.service;
static; vendor preset: disabled)
Active: inactive (dead) since Thu 2017-02-16 10:47:14 PST; 1
months 2 days ago
Condition: start condition failed at Tue 2017-03-21 05:47:42 PDT;
2s ago
ConditionPathExists=!/var/lib/ceph/bootstrap-mds/ceph.keyring was
not met
Main PID: 2576 (code=exited, status=0/SUCCESS)
===

Have anyone faced this error before ?





--
Respectfully,

Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210





[ceph-users] failed to connect to the RADOS monitor on: IP:6789, : Connection timed out

2017-03-21 Thread Vince

Hi,

We have set up a Ceph cluster, and while adding it as primary storage in
CloudStack, I am getting the below error on the hypervisor server. The
error says the hypervisor timed out while connecting to the Ceph monitor.


I have disabled the firewall and made sure the ports are open. This is the
final step. Please help.


==

2017-03-22 02:26:48,842 INFO [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) Didn't find an existing storage pool 
2a8446a8-cd7b-33a8-b5c7-7cda92f0 by UUID, checking for pools with 
duplicate paths
2017-03-22 02:27:18,999 ERROR [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) Failed to create RBD storage pool: 
org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 
123.345.56.7:6789,: Connection timed out
2017-03-22 02:27:18,999 ERROR [kvm.storage.LibvirtStorageAdaptor] 
(agentRequest-Handler-2:null) Failed to create the RBD storage pool, 
cleaning up the libvirt secret


==
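
A connection timeout on 6789/tcp is almost always a network or firewall issue
rather than a Ceph one, so it is worth proving basic reachability from the
hypervisor and checking the monitor host itself. A rough sketch (substitute
the real monitor IP; the firewalld commands assume CentOS 7 as elsewhere in
these threads):

# On the hypervisor: can the monitor port be reached at all?
nc -zv <monitor-ip> 6789

# On the monitor: is ceph-mon listening, and is the port open in firewalld?
ss -tlnp | grep 6789
firewall-cmd --list-all
firewall-cmd --permanent --add-port=6789/tcp && firewall-cmd --reload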

