Can you paste dmesg and the system logs? I am running a 3-node OCFS2 cluster on RBD and have had no problems.
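
Something like the following on each affected node would help (a rough sketch; log paths and the time window are assumptions, adjust for your distro):

    dmesg | tail -n 200                                  # recent kernel messages (ocfs2/o2net/rbd errors show up here)
    grep -iE 'ocfs2|o2net|o2cb|rbd' /var/log/messages    # or /var/log/syslog on Debian/Ubuntu
    journalctl -k --since "-1 hour"                      # on systemd hosts, kernel log for the last hour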

On 15-10-23 08:40, gjprabu wrote:
Hi Frederic,

Can you give us a solution? We are spending a lot of time trying to resolve this issue.

Regards
Prabu




---- On Thu, 15 Oct 2015 17:14:13 +0530 *Tyler Bishop <tyler.bis...@beyondhosting.net>* wrote ----

    I don't know enough about OCFS2 to help. It sounds like you have
    uncoordinated concurrent writes, though.

    On Oct 15, 2015, at 1:53 AM, gjprabu <gjpr...@zohocorp.com> wrote:

        Hi Tyler,

           Can you please send me the next steps to take on
        this issue.

        Regards
        Prabu


        ---- On Wed, 14 Oct 2015 13:43:29 +0530 *gjprabu <gjpr...@zohocorp.com>* wrote ----

            Hi Tyler,

                     Thanks for your reply. We have disabled rbd_cache,
            but the issue still persists. Please find our configuration
            file below.

            # cat /etc/ceph/ceph.conf
            [global]
            fsid = 944fa0af-b7be-45a9-93ff-b9907cfaee3f
            mon_initial_members = integ-hm5, integ-hm6, integ-hm7
            mon_host = 192.168.112.192,192.168.112.193,192.168.112.194
            auth_cluster_required = cephx
            auth_service_required = cephx
            auth_client_required = cephx
            filestore_xattr_use_omap = true
            osd_pool_default_size = 2

            [mon]
            mon_clock_drift_allowed = .500

            [client]
            rbd_cache = false
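
            (A quick way to double-check this on each client, as a sketch only:
            kernel-mapped /dev/rbdX devices use the kernel client and ignore
            librbd's rbd_cache anyway, and the admin-socket path below is an
            assumption that only exists if an "admin socket" option is
            configured for the client.)

            rbd showmapped                      # lists kernel RBD mappings on this node
            ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache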

            
            --------------------------------------------------------------------------------------

             cluster 944fa0af-b7be-45a9-93ff-b9907cfaee3f
                 health HEALTH_OK
                 monmap e2: 3 mons at {integ-hm5=192.168.112.192:6789/0,integ-hm6=192.168.112.193:6789/0,integ-hm7=192.168.112.194:6789/0}
                        election epoch 480, quorum 0,1,2 integ-hm5,integ-hm6,integ-hm7
                 osdmap e49780: 2 osds: 2 up, 2 in
                  pgmap v2256565: 190 pgs, 2 pools, 1364 GB data, 410 kobjects
                        2559 GB used, 21106 GB / 24921 GB avail
                             190 active+clean
              client io 373 kB/s rd, 13910 B/s wr, 103 op/s


            Regards
            Prabu

            ---- On Tue, 13 Oct 2015 19:59:38 +0530 *Tyler Bishop <tyler.bis...@beyondhosting.net>* wrote ----

                You need to disable RBD caching.



                        

                *Tyler Bishop*
                Chief Technical Officer
                513-299-7108 x10

                tyler.bis...@beyondhosting.net




                
                ------------------------------------------------------------------------

                *From:* "gjprabu" <gjpr...@zohocorp.com>
                *To:* "Frédéric Nass" <frederic.n...@univ-lorraine.fr>
                *Cc:* <ceph-users@lists.ceph.com>, "Siva Sokkumuthu" <sivaku...@zohocorp.com>, "Kamal Kannan Subramani (kamalakannan)" <ka...@manageengine.com>
                *Sent:* Tuesday, October 13, 2015 9:11:30 AM
                *Subject:* Re: [ceph-users] ceph same rbd on multiple client

                Hi,

                 We have Ceph RBD with OCFS2 mounted on several servers.
                When we move a folder on one node, the other nodes hit
                I/O errors on the replicated data for that folder, as
                shown below (copying does not cause this problem). As a
                workaround, remounting the partition resolves the issue,
                but after some time the problem reoccurs. Please help us
                with this issue.

                Note: We have 5 nodes in total. Two nodes work fine; the
                other nodes show input/output errors like those below on
                the moved data.

                ls -althr
                ls: cannot access LITE_3_0_M4_1_TEST: Input/output error
                ls: cannot access LITE_3_0_M4_1_OLD: Input/output error
                total 0
                d????????? ? ? ? ? ? LITE_3_0_M4_1_TEST
                d????????? ? ? ? ? ? LITE_3_0_M4_1_OLD
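
                (A rough sketch of the remount workaround described
                above; the device and mount point here are assumptions:)

                fuser -vm /mnt/ocfs2                 # see which processes still hold the mount
                umount /mnt/ocfs2
                mount -t ocfs2 /dev/rbd0 /mnt/ocfs2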

                Regards
                Prabu


                ---- On Fri, 22 May 2015 17:33:04 +0530 *Frédéric Nass <frederic.n...@univ-lorraine.fr>* wrote ----

                    Hi,

                    While waiting for CephFS, you can use a clustered
                    filesystem like OCFS2 or GFS2 on top of RBD
                    mappings, so that every host can access the same
                    device through the clustered filesystem.
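
                    Roughly, the setup looks like this (a sketch only:
                    the image name, size, slot count and mount point are
                    assumptions, and the o2cb cluster must already be
                    defined in /etc/ocfs2/cluster.conf on every node):

                    # on one node only: create and format the shared image
                    rbd create shared-img --size 102400      # size in MB; name and size are examples
                    rbd map shared-img                       # shows up as /dev/rbd0 (number may differ)
                    mkfs.ocfs2 -N 5 -L shared /dev/rbd0      # -N = number of node slots

                    # on every node: map the image and mount the clustered filesystem
                    rbd map shared-img
                    mount -t ocfs2 /dev/rbd0 /mnt/shared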

                    Regards,

                    Frédéric.

                    --
                    Frédéric Nass

                    Sous direction des Infrastructures,
                    Direction du Numérique,
                    Université de Lorraine.

                    Tél : 03.83.68.53.83

                    On 21/05/2015 16:10, gjprabu wrote:



                        Hi All,

                                We are using RBD and map the same RBD
                        image to an RBD device on two different clients,
                        but data written on one client is not visible on
                        the other until we umount and mount -a the
                        partition again. Kindly share a solution for
                        this issue.

                        *Example*
                        create rbd image named foo
                        map foo to /dev/rbd0 on server A, mount /dev/rbd0 to /mnt
                        map foo to /dev/rbd0 on server B, mount /dev/rbd0 to /mnt
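
                        (In commands, the example above is roughly the
                        following; the image size and filesystem type
                        are assumptions. With a non-clustered filesystem
                        such as ext4, each mount caches independently,
                        so writes from one server are not visible on the
                        other until remount.)

                        rbd create foo --size 10240
                        # server A
                        rbd map foo && mount /dev/rbd0 /mnt
                        # server B
                        rbd map foo && mount /dev/rbd0 /mnt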

                        Regards
                        Prabu












_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
