On Thu, Oct 31, 2013 at 4:01 PM, Trivedi, Narendra
<narendra.triv...@savvis.com> wrote:
> Okay, I mkdir'ed /etc/ceph on the admin node and issued the command
> again, and now it seems to go through:
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy admin 
> ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4 
> ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
> [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy admin 
> ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4 
> ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
> [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to 
> ceph-admin-node-centos-6-4
> [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
> [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
> [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to 
> ceph-node1-mon-centos-6-4
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to 
> ceph-node2-osd0-centos-6-4
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to 
> ceph-node3-osd1-centos-6-4
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph
> -bash: ceph: command not found
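Note that `ceph-deploy admin` only pushes the conf file and admin
keyring; it does not install the ceph packages themselves, which is why
the `ceph` command is not found on your admin node. Assuming you also
want the CLI there (hostname taken from your output above), something
like this, run from the admin node, should take care of it:

    ceph-deploy install ceph-admin-node-centos-6-4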
>
> But now, when I go to the MON node (I have only one) and issue ceph -w or
> ceph health, it throws errors:
>
> [ceph@ceph-node1-mon-centos-6-4 ceph]$ ceph -w
> 2013-10-31 14:52:58.601016 7fcd19b11700 -1 monclient(hunting): ERROR: missing 
> keyring, cannot use cephx for authentication
> 2013-10-31 14:52:58.601063 7fcd19b11700  0 librados: client.admin 
> initialization error (2) No such file or directory
> Error connecting to cluster: ObjectNotFound
> [ceph@ceph-node1-mon-centos-6-4 ceph]$ ceph health
> 2013-10-31 14:53:03.675425 7fe4937fc700 -1 monclient(hunting): ERROR: missing 
> keyring, cannot use cephx for authentication
> 2013-10-31 14:53:03.675467 7fe4937fc700  0 librados: client.admin 
> initialization error (2) No such file or directory
These messages are a bit misleading, but you need to either be root or
call ceph with sudo.
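For example (assuming the admin keyring was pushed to
/etc/ceph/ceph.client.admin.keyring, which is where `ceph-deploy admin`
puts it):

    sudo ceph health
    sudo ceph -w

Once the monitor and OSDs are up, `ceph health` reporting HEALTH_OK (or
a warning while the placement groups settle) is a reasonable sign that
the install is done.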
> Error connecting to cluster: ObjectNotFound
>
> How do I know I am finally done installing ceph? Shall I ignore this and
> install a ceph client VM to test object and block storage?
>
> Thanks!
> Narendra
>
> -----Original Message-----
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Thursday, October 31, 2013 11:04 AM
> To: Trivedi, Narendra
> Cc: Karan Singh; alistair whittle; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Activating OSDs takes forever
>
> On Thu, Oct 31, 2013 at 12:01 PM, Trivedi, Narendra 
> <narendra.triv...@savvis.com> wrote:
>> After activating OSDs, I tried to use ceph-deploy to copy the
>> configuration file and admin key to my admin node and my ceph nodes so
>> that I can use the ceph CLI without having to specify the monitor address
>> and ceph.client.admin.keyring each time I execute a command, but I got
>> the following error:
>>
>>
>>
>> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy admin
>> ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4
>> ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
>>
>> [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy admin
>> ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4
>> ceph-node2-osd0-centos-6-4 ceph-node3-osd1-centos-6-4
>>
>> [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to
>> ceph-admin-node-centos-6-4
>>
>> [ceph_deploy.sudo_pushy][DEBUG ] will use a local connection with sudo
>>
>> Traceback (most recent call last):
>>
>>   File "/usr/bin/ceph-deploy", line 21, in <module>
>>
>>     sys.exit(main())
>>
>>   File
>> "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py",
>> line 83, in newfunc
>>
>>     return f(*a, **kw)
>>
>>   File "/usr/lib/python2.6/site-packages/ceph_deploy/cli.py", line
>> 150, in main
>>
>>     return args.func(args)
>>
>>   File "/usr/lib/python2.6/site-packages/ceph_deploy/admin.py", line
>> 40, in admin
>>
>>     overwrite=args.overwrite_conf,
>>
>>   File "/usr/lib/python2.6/site-packages/pushy/protocol/proxy.py",
>> line 255, in <lambda>
>>
>>     (conn.operator(type_, self, args, kwargs))
>>
>>   File
>> "/usr/lib/python2.6/site-packages/pushy/protocol/connection.py", line
>> 66, in operator
>>
>>     return self.send_request(type_, (object, args, kwargs))
>>
>>   File
>> "/usr/lib/python2.6/site-packages/pushy/protocol/baseconnection.py",
>> line 329, in send_request
>>
>>     return self.__handle(m)
>>
>>   File
>> "/usr/lib/python2.6/site-packages/pushy/protocol/baseconnection.py",
>> line 645, in __handle
>>
>>     raise e
>>
>> pushy.protocol.proxy.ExceptionProxy: [Errno 2] No such file or directory:
>> '/etc/ceph/ceph.conf.16463.tmp'
>>
>>
>>
>> Has anyone seen anything like this?
>
> Yes, this should be fixed in the upcoming ceph-deploy version. It happens
> when you attempt to run ceph-deploy against a node that doesn't have ceph
> installed: the `/etc/ceph` directory doesn't exist there, because it is
> only created when ceph is installed.
>
> If ceph was installed and this is still happening, it is because purge or
> purgedata removed that directory. The current workaround is to `mkdir
> /etc/ceph` if ceph is in fact installed.
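>
> A minimal sketch of that workaround (assuming ceph really is installed
> on the target node; hostname taken from the output above):
>
>     ssh ceph-node1-mon-centos-6-4 'sudo mkdir -p /etc/ceph'
>
> Then re-run `ceph-deploy admin` against that node.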
>>
>>
>>
>>
>>
>>
>>
>> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>> Sent: Thursday, October 31, 2013 8:19 AM
>> To: Karan Singh
>> Cc: alistair whittle; Trivedi, Narendra; ceph-users@lists.ceph.com
>>
>>
>> Subject: Re: [ceph-users] Activating OSDs takes forever
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Oct 31, 2013 at 8:07 AM, Karan Singh <ksi...@csc.fi> wrote:
>>
>> You are right AW, I also faced this very same error message; apparently
>> everything just works fine so far, with no issues reported yet.
>>
>>
>>
>> Regards
>>
>> Karan Singh
>>
>>
>>
>> ________________________________
>>
>> From: "alistair whittle" <alistair.whit...@barclays.com>
>> To: "Narendra Trivedi" <narendra.triv...@savvis.com>,
>> ceph-users@lists.ceph.com
>> Sent: Thursday, 31 October, 2013 12:06:52 PM
>> Subject: Re: [ceph-users] Activating OSDs takes forever
>>
>>
>>
>>
>>
>> I got these exact same error messages on my RHEL build. It did, however,
>> seem that my OSDs were correctly built and brought online. I never got to
>> the bottom of these errors, though.
>>
>>
>>
>> There were others on this forum who had similar issues, and their
>> setups too seemed to be working successfully despite the apparent
>> problems during activation.
>>
>>
>>
>> From: ceph-users-boun...@lists.ceph.com
>> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Trivedi,
>> Narendra
>> Sent: Wednesday, October 30, 2013 8:58 PM
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] Activating OSDs takes forever
>>
>>
>>
>> Hi All,
>>
>>
>>
>> I had a pretty good run until I issued a command to activate OSDs. Now
>> I am back with some more problems :(. My setup is exactly like the one
>> in the official ceph documentation:
>>
>>
>>
>> http://ceph.com/docs/master/start/quick-ceph-deploy/
>>
>>
>>
>> That means, I am just using node2:/tmp/osd0 and node3:/tmp/osd1 as
>> OSDs. I am running 64-bit CentOS-6-4.
>>
>>
>>
>> I was able to prepare the OSDs fine, but activating them just takes
>> forever (15 mins and counting):
>>
>>
>>
>> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy osd activate
>> ceph-node2-osd0-centos-6-4:/tmp/osd0
>> ceph-node3-osd1-centos-6-4:/tmp/osd1
>>
>> [ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd
>> activate
>> ceph-node2-osd0-centos-6-4:/tmp/osd0
>> ceph-node3-osd1-centos-6-4:/tmp/osd1
>>
>> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
>> ceph-node2-osd0-centos-6-4:/tmp/osd0: ceph-node3-osd1-centos-6-4:/tmp/osd1:
>>
>> [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with
>> sudo
>>
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
>>
>> [ceph_deploy.osd][DEBUG ] activating host ceph-node2-osd0-centos-6-4
>> disk
>> /tmp/osd0
>>
>> [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
>>
>> [ceph-node2-osd0-centos-6-4][INFO  ] Running command:
>> ceph-disk-activate --mark-init sysvinit --mount /tmp/osd0
>>
>> [ceph-node2-osd0-centos-6-4][INFO  ] === osd.0 ===
>>
>> [ceph-node2-osd0-centos-6-4][INFO  ] Starting Ceph osd.0 on
>> ceph-node2-osd0-centos-6-4...
>>
>> [ceph-node2-osd0-centos-6-4][INFO  ] starting osd.0 at :/0 osd_data
>> /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] got latest monmap
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.059954
>> 7faffbd537a0
>> -1 journal FileJournal::_open: disabling aio for non-block journal.
>> Use journal_force_aio to force use of aio anyway
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.121705
>> 7faffbd537a0
>> -1 journal FileJournal::_open: disabling aio for non-block journal.
>> Use journal_force_aio to force use of aio anyway
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.122260
>> 7faffbd537a0
>> -1 filestore(/tmp/osd0) could not find 23c2fcde/osd_superblock/0//-1
>> in
>> index: (2) No such file or directory
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.204635
>> 7faffbd537a0
>> -1 created object store /tmp/osd0 journal /tmp/osd0/journal for osd.0
>> fsid ec8c48d5-3889-433b-bf68-558f2eb39a8c
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.204714
>> 7faffbd537a0
>> -1 auth: error reading file: /tmp/osd0/keyring: can't open
>> /tmp/osd0/keyring: (2) No such file or directory
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] 2013-10-30 15:34:54.204938
>> 7faffbd537a0
>> -1 created new key in keyring /tmp/osd0/keyring
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] added key for osd.0
>>
>> [ceph-node2-osd0-centos-6-4][ERROR ] create-or-move updating item name
>> 'osd.0' weight 0 at location
>> {host=ceph-node2-osd0-centos-6-4,root=default}
>> to crush map
>>
>>
>>
>> Why is ceph-deploy throwing a bunch of errors when all I did was
>> create directories under /tmp on both nodes and issue a command to
>> prepare them, without any problems? I mean, I can always delete the
>> /tmp/osd0-1 directories and re-issue the commands, but I am wondering
>> if anyone else has seen this before?
>>
>>
>>
>>
>>
>> The logging here is misleading: ceph-deploy interprets "remote stderr
>> output" as ERROR logging and "remote stdout output" as DEBUG. So if the
>> remote host sends output to `stderr`, it will show as ERROR in
>> ceph-deploy's logging.
>>
>> Unfortunately there are tools (and libraries!) that default to pushing
>> all their logging to `stderr`. There is little ceph-deploy can do here,
>> because it cannot tell whether the output is _really_ an error or just
>> ordinary logging that happens to go to `stderr`.
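>>
>> As a quick illustration, here is a command that succeeds (exit status
>> 0) while writing its only output to `stderr`:
>>
>>     $ python -c 'import sys; sys.stderr.write("all good\n")'; echo $?
>>     all good
>>     0
>>
>> Run through ceph-deploy, that "all good" line would be rendered as
>> [ERROR ] even though nothing actually failed.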
>>
>> I will make sure to add a note about this in the ceph-deploy docs so it
>> is a bit clearer.
>>
>>
>>
>> Thanks a lot in advance!
>>
>> Narendra
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
