On Fri, Jul 14, 2017 at 10:37 AM, Oscar Segarra <oscar.sega...@gmail.com>
wrote:

> I'm testing on the latest Jewel version I've found in the repositories:
>
You can skip that command then; I will fix the document to add a note for
Jewel and pre-Luminous builds.
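
For reference, a quick way to decide whether the mgr step applies is to check the installed release first (just a rough sketch; ceph-mgr only exists from Luminous, 12.x, onwards):

    # Jewel (10.2.x) has no ceph-mgr daemon, so "ceph-deploy mgr create" can be skipped.
    # Luminous (12.x) and later need at least one mgr.
    ceph --version
    # e.g. "ceph version 10.2.8 (...)" -> Jewel, skip the mgr step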

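For the Luminous failure further down the thread ("client.bootstrap-mgr authentication error (22) Invalid argument"), one workaround that has been used when the monitors were bootstrapped before a bootstrap-mgr key existed is to create that key by hand and let ceph-deploy pick it up again; this is only a sketch, assuming a missing or mismatched bootstrap-mgr key is the cause:

    # On a monitor node, create the key that ceph-deploy authenticates with:
    sudo ceph auth get-or-create client.bootstrap-mgr mon 'allow profile bootstrap-mgr'
    # Back on the admin node, refresh the local bootstrap keyrings and retry:
    ceph-deploy gatherkeys mon1
    ceph-deploy mgr create mon1 nuc2
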

>
> [root@vdicnode01 yum.repos.d]# ceph --version
> ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e)
>
> thanks a lot!
>
> 2017-07-14 19:21 GMT+02:00 Vasu Kulkarni <vakul...@redhat.com>:
>
>> It is tested for master and is working fine; I will run those same tests
>> on Luminous, check if there is an issue, and update here. mgr create is
>> needed for Luminous+ builds only.
>>
>> On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown <rogerpbr...@gmail.com>
>> wrote:
>>
>>> I've been trying to work through similar mgr issues for
>>> Xenial-Luminous...
>>>
>>> roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>> /home/roger/.cephdeploy.conf
>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr
>>> create mon1 nuc2
>>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>> [ceph_deploy.cli][INFO  ]  username                      : None
>>> [ceph_deploy.cli][INFO  ]  verbose                       : False
>>> [ceph_deploy.cli][INFO  ]  mgr                           : [('mon1',
>>> 'mon1'), ('nuc2', 'nuc2')]
>>> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
>>> [ceph_deploy.cli][INFO  ]  subcommand                    : create
>>> [ceph_deploy.cli][INFO  ]  quiet                         : False
>>> [ceph_deploy.cli][INFO  ]  cd_conf                       :
>>> <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f25b40fbc20>
>>> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
>>> [ceph_deploy.cli][INFO  ]  func                          : <function mgr
>>> at 0x7f25b4772668>
>>> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
>>> [ceph_deploy.cli][INFO  ]  default_release               : False
>>> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts mon1:mon1
>>> nuc2:nuc2
>>> [mon1][DEBUG ] connection detected need for sudo
>>> [mon1][DEBUG ] connected to host: mon1
>>> [mon1][DEBUG ] detect platform information from remote host
>>> [mon1][DEBUG ] detect machine type
>>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon1
>>> [mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>> [mon1][DEBUG ] create path if it doesn't exist
>>> [mon1][INFO  ] Running command: sudo ceph --cluster ceph --name
>>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>>> auth get-or-create mgr.mon1 mon allow profile mgr osd allow * mds allow *
>>> -o /var/lib/ceph/mgr/ceph-mon1/keyring
>>> [mon1][ERROR ] 2017-07-14 11:17:19.667418 7f309613f700  0 librados:
>>> client.bootstrap-mgr authentication error (22) Invalid argument
>>> [mon1][ERROR ] (22, 'error connecting to the cluster')
>>> [mon1][ERROR ] exit code from command was: 1
>>> [ceph_deploy.mgr][ERROR ] could not create mgr
>>> [nuc2][DEBUG ] connection detected need for sudo
>>> [nuc2][DEBUG ] connected to host: nuc2
>>> [nuc2][DEBUG ] detect platform information from remote host
>>> [nuc2][DEBUG ] detect machine type
>>> [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
>>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
>>> [nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>> [nuc2][DEBUG ] create path if it doesn't exist
>>> [nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
>>> client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
>>> auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
>>> -o /var/lib/ceph/mgr/ceph-nuc2/keyring
>>> [nuc2][ERROR ] 2017-07-14 17:17:21.800166 7fe344f32700  0 librados:
>>> client.bootstrap-mgr authentication error (22) Invalid argument
>>> [nuc2][ERROR ] (22, 'error connecting to the cluster')
>>> [nuc2][ERROR ] exit code from command was: 1
>>> [ceph_deploy.mgr][ERROR ] could not create mgr
>>> [ceph_deploy][ERROR ] GenericError: Failed to create 2 MGRs
>>> roger@desktop:~/ceph-cluster$
>>>
>>>
>>>
>>> On Fri, Jul 14, 2017 at 11:01 AM Oscar Segarra <oscar.sega...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm following the instructions on the web (
>>>> http://docs.ceph.com/docs/master/start/quick-ceph-deploy/) and I'm
>>>> trying to create a manager on my first node.
>>>>
>>>> In my environment I have 2 nodes:
>>>>
>>>> - vdicnode01 (mon, mgr and osd)
>>>> - vdicnode02 (osd)
>>>>
>>>> Each server has two NICs, the public one and the private one, which all
>>>> Ceph traffic will go over.
>>>>
>>>> I have created .local entries in /etc/hosts:
>>>>
>>>> 192.168.100.101   vdicnode01.local
>>>> 192.168.100.102   vdicnode02.local
>>>>
>>>> Public names are resolved via DNS.
>>>>
>>>> When I try to create the mgr in a fresh install I get the following
>>>> error:
>>>>
>>>> [vdicceph@vdicnode01 ceph]$ ceph-deploy --username vdicceph mgr create
>>>> vdicnode01.local
>>>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>>>> /home/vdicceph/.cephdeploy.conf
>>>> [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /bin/ceph-deploy --username
>>>> vdicceph mgr create vdicnode01.local
>>>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>>> [ceph_deploy.cli][INFO  ]  username                      : vdicceph
>>>> [ceph_deploy.cli][INFO  ]  verbose                       : False
>>>> [ceph_deploy.cli][INFO  ]  mgr                           :
>>>> [('vdicnode01.local', 'vdicnode01.local')]
>>>> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
>>>> [ceph_deploy.cli][INFO  ]  subcommand                    : create
>>>> [ceph_deploy.cli][INFO  ]  quiet                         : False
>>>> [ceph_deploy.cli][INFO  ]  cd_conf                       :
>>>> <ceph_deploy.conf.cephdeploy.Conf instance at 0x1985680>
>>>> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
>>>> [ceph_deploy.cli][INFO  ]  func                          : <function
>>>> mgr at 0x1916848>
>>>> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
>>>> [ceph_deploy.cli][INFO  ]  default_release               : False
>>>> [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts
>>>> vdicnode01.local:vdicnode01.local
>>>> [vdicnode01.local][DEBUG ] connection detected need for sudo
>>>> [vdicnode01.local][DEBUG ] connected to host: vdicceph@vdicnode01.local
>>>> [vdicnode01.local][DEBUG ] detect platform information from remote host
>>>> [vdicnode01.local][DEBUG ] detect machine type
>>>> [ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
>>>> [ceph_deploy.mgr][DEBUG ] remote host will use systemd
>>>> [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to vdicnode01.local
>>>> [vdicnode01.local][DEBUG ] write cluster configuration to
>>>> /etc/ceph/{cluster}.conf
>>>> [vdicnode01.local][DEBUG ] create path if it doesn't exist
>>>> [ceph_deploy.mgr][ERROR ] OSError: [Errno 2] No such file or directory:
>>>> '/var/lib/ceph/mgr/ceph-vdicnode01.local'
>>>> [ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
>>>>
>>>> --> I get the same error if I use the vdicnode01 hostname (without
>>>> .local).
>>>>
>>>> Any help will be welcome!
>>>>
>>>> Thanks a lot in advance
>>>>
>>>
>>>
>>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
