Thank you for the response.

Well, I have two VMs (client and server), and I went through the whole 5-minute quick start, plus the RBD and CephFS quick start commands. In the configuration file I kept the default values, except that I set the default pg and pgp num to 200; you can see it in the attached file.

I followed the steps shown at this link:
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location

When trying to identify the object's location, I didn't get the expected
result; instead, I got the following message:

*Pool obj_name does not exist.*

Am I missing something here?
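For reference, here is a sketch of the sequence from the linked docs page, using a hypothetical pool name "data" and object name "test-object-1" (both placeholders, not taken from your setup). The error wording suggests the pool and object arguments may have been swapped, since `ceph osd map` expects the pool name first.

```shell
# 1. Write a test object into an existing pool:
echo "hello ceph" > testfile.txt
rados put test-object-1 testfile.txt --pool=data

# 2. Confirm the object is there:
rados -p data ls

# 3. Map the object: the POOL name comes first, then the object name.
ceph osd map data test-object-1

# If the pool might not exist, list pools first:
ceph osd lspools
```

If the object name is passed in the pool position (e.g. `ceph osd map test-object-1 data`), Ceph reports exactly "Pool test-object-1 does not exist", which matches the message you saw.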

Best regards,
Thank you


On Tue, Mar 26, 2013 at 11:57 AM, Sebastien Han
<sebastien....@enovance.com> wrote:

> Hi,
>
> Could you post all the steps you made here? It would be easier to help you.
> No need to shout.
>
> Cheers.
> ––––
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
>
> PHONE : +33 (0)1 49 70 99 72 – MOBILE : +33 (0)6 52 84 44 70
> EMAIL : sebastien....@enovance.com – SKYPE : han.sbastien
> ADDRESS : 10, rue de la Victoire – 75009 Paris
> WEB : www.enovance.com – TWITTER : @enovance
>
> On Mar 26, 2013, at 8:23 AM, Waed Bataineh <promiselad...@gmail.com>
> wrote:
>
> Hi,
>
> I tried to locate an object to test Ceph, and when I tried to do the
> map, the following message appeared: Pool obj_name does not exist
> What went wrong? And where exactly are the files stored as objects on
> the server (path to the objects)?
>
> Thank you.
>

<<image.png>>

[global]

        # For version 0.55 and beyond, you must explicitly enable
        # or disable authentication with "auth" entries in [global].

        auth cluster required = none
        auth service required = none
        auth client required = none

        # Ensure you have a realistic number of placement groups. We recommend
        # approximately 100 per OSD. E.g., total number of OSDs multiplied by 100
        # divided by the number of replicas (i.e., osd pool default size). So for
        # 2 OSDs and osd pool default size = 1, we'd recommend approximately
        # (100 * 2) / 1 = 200.

        osd pool default size = 1     # Keep only one copy of each object (no replication).
        osd pool default min size = 1 # Allow writing one copy in a degraded state.

        osd pool default pg num = 200
        osd pool default pgp num = 200

[osd]
        osd journal size = 1000

        #The following assumes ext4 filesystem.
        filestore xattr use omap = true


        # For Bobtail (v 0.56) and subsequent versions, you may
        # add settings for mkcephfs so that it will create and mount
        # the file system on a particular OSD for you. Remove the comment `#`
        # character for the following settings and replace the values
        # in braces with appropriate values, or leave the following settings
        # commented out to accept the default values. You must specify the
        # --mkfs option with mkcephfs in order for the deployment script to
        # utilize the following settings, and you must define the 'devs'
        # option for each osd instance; see below.

        #osd mkfs type = {fs-type}
        #osd mkfs options {fs-type} = {mkfs options}   # default for xfs is "-f"
        #osd mount options {fs-type} = {mount options} # default mount option is "rw,noatime"

        # For example, for ext4, the mount options might look like this:

        #osd mount options ext4 = user_xattr,rw,noatime

        # Execute $ hostname to retrieve the name of your host,
        # and replace {hostname} with the name of your host.
        # For the monitor, replace {ip-address} with the IP
        # address of your host.

[mon.a]

        host = justcbuser-virtual-machine
        mon addr = 10.242.20.249:6789

[osd.0]
        host = justcbuser-virtual-machine

        # For Bobtail (v 0.56) and subsequent versions, you may
        # add settings for mkcephfs so that it will create and mount
        # the file system on a particular OSD for you. Remove the comment `#`
        # character for the following setting for each OSD and specify
        # a path to the device if you use mkcephfs with the --mkfs option.

        #devs = {path-to-device}

[osd.1]
        host = justcbuser-virtual-machine
        #devs = {path-to-device}

[mds.a]
        host = justcbuser-virtual-machine
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
