On Mon, Aug 26, 2013 at 10:45 AM, Johannes Klarenbeek 
<johannes.klarenb...@rigo.nl> wrote:
Hello ceph-users,

I'm trying to set up a Linux cluster, but it is taking me a little longer than I 
hoped. There are some things that I do not quite understand yet; hopefully 
some of you can help me out.


1)      When using ceph-deploy, a ceph.conf file is created in the current 
directory and in the /etc/ceph directory. Which one is ceph-deploy using and 
which one should I edit?
The usual workflow is that ceph-deploy uses the one in the current directory to 
overwrite the remote one (the one in /etc/ceph/), but it will warn when the two 
differ and error out, stating that it needs the `--overwrite-conf` flag
to continue.

Aha, so the one in the current directory is authoritative. And if I make changes to 
that file, it will not be overwritten by ceph-deploy?
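
For reference, a minimal sketch of pushing a locally edited ceph.conf out to a node 
(using the node name cephnode1 from this thread):

# edit ./ceph.conf in the working directory, then push it to the node;
# --overwrite-conf is needed when /etc/ceph/ceph.conf on the node already differs
ceph-deploy --overwrite-conf config push cephnode1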


2)      I have 6 OSDs running per machine. I zapped the disks with ceph-deploy 
disk zap, prepared/activated them with a separate journal on an SSD, and 
they are all running.


a)      ceph-deploy disk list doesn't show me which file system is in use (unless 
'Linux Filesystem', as it calls it, counts as a file system in its own right). Neither 
does it show which partition or path is used for the journal.
That ceph-deploy command calls `ceph-disk list` on the remote host, which in 
turn does not (as far as I could see) tell you which exact file system is in use.
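
One way to check this directly on the OSD host instead (a rough sketch, assuming 
the default data paths under /var/lib/ceph; /dev/sdd1 and osd.0 are only examples):

# file system type of each mounted OSD data directory
df -T /var/lib/ceph/osd/ceph-*
# or query a data partition directly
blkid /dev/sdd1
# the journal an OSD uses is a symlink inside its data directory
ls -l /var/lib/ceph/osd/ceph-0/journal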

b)      Running parted doesn't show me which file system is in use either 
(except, of course, that it is a 'Linux Filesystem')... I believe parted should 
do the trick to show me these settings?
How are you calling parted, and with what flags? Usually something like `sudo 
parted /dev/{device} print` would print some output. Can you show what you are 
getting back?

I started parted and then used the print command... hmm, it now actually 
returns something else; my bad. This is what it returns (and it is using xfs):
root@cephnode1:/root# parted /dev/sdd print
Model: ATA WDC WD2000FYYZ-0 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name       Flags
 1      1049kB  2000GB  2000GB  xfs          ceph data

c)      When a GPT partition is corrupt (missing its msdos prelude, for example), 
ceph-deploy disk zap doesn't work. But after repeating the command four times, it 
works, and the partition shows up as 'Linux Filesystem'.
So it didn't work 4 times and then it did? This does sound unexpected.

It is. But I have to say, I was fooling around a little with dd to wipe the 
disk clean.
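
If dd left a partial GPT behind (GPT keeps a backup copy at the end of the disk), 
that could explain the flaky zap. A sketch of clearing it completely before 
re-running disk zap; /dev/sdd is only an example, and sgdisk comes from the gdisk 
package:

# remove the protective MBR plus the primary and backup GPT
sgdisk --zap-all /dev/sdd
# alternatively, erase all known file system and partition-table signatures
wipefs -a /dev/sdd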

d)      How can you set the file system that ceph-deploy disk zap formats the 
ceph data disk with? I would like to zap a disk to XFS, for example.
This is currently not supported but could be added as a feature.

Seems like an important feature. How does ceph-deploy determine which file system 
the disk is zapped with?
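
As far as I can tell, zap itself only clears the partition table; the file system is 
created later, when the disk is prepared, and the default there is xfs (which matches 
the parted output above). A rough sketch of steering that from ceph.conf, assuming 
your ceph-disk version honours these options:

[osd]
# pick the file system used when an OSD data disk is prepared
osd mkfs type = xfs
osd mkfs options xfs = -f
osd mount options xfs = noatime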


3)      Is there a way to set the data partition for ceph-mon with ceph-deploy, 
or should I do it manually in ceph.conf? And how do I format that partition (what 
file system should I use)?

(This is, however, something I still need to do!)
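
For when you get to it, a sketch of the relevant option; the path is only an example. 
By default ceph-deploy keeps the monitor store under /var/lib/ceph/mon/ on the 
existing file system, so nothing special needs to be formatted:

[mon.cephnode1]
host = cephnode1
# default location; shown here only to make the option explicit
mon data = /var/lib/ceph/mon/ceph-cephnode1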

4)      When I run ceph status, this is the message I get:
root@cephnode1:/root# ceph status
    cluster: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    health:  HEALTH_WARN 37 pgs degraded; 192 pgs stuck unclean
    monmap e1: 1 mons at {cephnode1=172.16.1.2:6789/0}, election epoch 1, quorum 0 cephnode1
    osdmap e38: 6 osds: 6 up, 6 in
    pgmap v65: 192 pgs: 155 active+remapped, 37 active+degraded; 0 bytes data, 213 MB used, 11172GB / 11172GB avail
    mdsmap e1: 0/0/1 up



a)      How do I get rid of the HEALTH_WARN message? Can I run some tool that 
initiates a repair?
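
A couple of commands that may help narrow this down:

ceph health detail   # lists exactly which PGs are degraded or stuck, and why
ceph osd tree        # shows how the OSDs are placed over hosts in the CRUSH map

With all six OSDs on one host and the default CRUSH rule spreading replicas across 
hosts, the PGs may never be able to go clean. For a single-host test setup, one 
workaround that is often mentioned is adding `osd crush chooseleaf type = 0` to 
[global] before the cluster is created (or editing the CRUSH rule afterwards), but 
treat that as an assumption about your setup rather than a diagnosis.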

b)      I have not put any data in it yet, but it already uses a whopping 213 
MB. Why?
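
To see where that figure comes from, one can compare the cluster-wide view with the 
individual OSD mounts; a freshly created XFS file system already reserves some space 
for its own metadata even when empty, and with six OSDs that adds up:

ceph df                          # cluster-wide and per-pool usage
df -h /var/lib/ceph/osd/ceph-*   # usage of each OSD's data file system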


5)      Last but not least, my config file looks like this:
root@cephnode1:/root# cat /etc/ceph/ceph.conf
[global]

fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_initial_members = cephnode1
mon_host = 172.16.1.2
auth_supported = cephx

osd_journal_size = 1024
filestore_xattr_use_omap = true



This is really strange, since the documentation states that a minimal config for my 
setup should contain at least the [mon.1] and [osd.1] [osd.2] [osd.3] [osd.4] 
[osd.5] [osd.6] sections. I have set up separate journals for my OSDs, but they 
don't show up in my conf file. Also, the journal partitions are 2 GB, not 1024 MB 
(if that is what osd_journal_size means).
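
For comparison, a sketch of what explicit per-daemon sections could look like; with 
ceph-deploy/ceph-disk they are normally not needed, because the daemons (and their 
journal symlinks) are discovered under /var/lib/ceph at start-up, which is why 
nothing OSD-specific shows up in ceph.conf. Also, osd_journal_size is expressed in 
MB and, as far as I know, only applies when Ceph creates the journal itself; a 
journal pointing at an existing 2 GB partition is simply used as-is. The host name 
and journal device below are only examples:

[mon.cephnode1]
host = cephnode1

[osd.0]
host = cephnode1
# example device: a journal partition on the SSD
osd journal = /dev/sdb1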






I can really use your help, since I'm stuck for the moment.

Regards,
Johannes










_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


