Hi Prabu,

I suspect others, like me, are not familiar with this setup combining CEPH RBD and OCFS2.

We'd really like to help you, but I don't think the ocfs2 developers can tell what happened
to ocfs2 from your description alone.

So I'm wondering if you can reproduce the problem and tell us the steps. Once developers can reproduce it, it's likely to be resolved ;-) BTW, any dmesg output about ocfs2, especially the initial error message and the stack
back trace, will be helpful!
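
For example, right after the error shows up, something like the following should capture the relevant kernel messages (the grep pattern is only a suggestion; adjust it as needed):

    # dump kernel messages mentioning ocfs2 and its cluster stack components
    dmesg | grep -iE 'ocfs2|o2net|o2hb|o2dlm'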

Thanks,
Eric

On 10/20/15 17:29, gjprabu wrote:
Hi

        We are looking forward to your input on this.

Regards
Prabu

--- On Fri, 09 Oct 2015 12:08:19 +0530 gjprabu <gjpr...@zohocorp.com> wrote ----

        Hi All,

                 Could anybody please help me with this issue?

        Regards
        Prabu

        ---- On Thu, 08 Oct 2015 12:33:57 +0530 gjprabu <gjpr...@zohocorp.com> wrote ----

            Hi All,

            We have servers with CEPH RBD volumes mounted via OCFS2. We
            are hitting input/output errors simultaneously when moving
            data within the same disk (copying does not have any
            problem). As a temporary fix we remount the partition and the
            issue is resolved, but after some time the problem reappears.
            If anybody has faced the same issue, please help us.
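
            The temporary remount we do is just a plain unmount and
            mount of the OCFS2 volume, roughly like the following (the
            device and mount point here are placeholders, not our
            actual names):

            # placeholder names: substitute the real RBD device and mount point
            umount /mnt/ocfs2
            mount -t ocfs2 /dev/rbd0 /mnt/ocfs2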

            Note: we have 5 nodes in total; two nodes are working fine,
            while the other nodes show input/output errors like the
            ones below.

            ls -althr
            ls: cannot access MICKEYLITE_3_0_M4_1_TEST: Input/output error
            ls: cannot access MICKEYLITE_3_0_M4_1_OLD: Input/output error
            total 0
            d????????? ? ? ? ? ? MICKEYLITE_3_0_M4_1_TEST
            d????????? ? ? ? ? ? MICKEYLITE_3_0_M4_1_OLD

            cluster:
                   node_count=5
                   heartbeat_mode = local
                   name=ocfs2

            node:
                    ip_port = 7777
                    ip_address = 192.168.113.42
                    number = 1
                    name = integ-hm9
                    cluster = ocfs2

            node:
                    ip_port = 7777
                    ip_address = 192.168.112.115
                    number = 2
                    name = integ-hm2
                    cluster = ocfs2

            node:
                    ip_port = 7777
                    ip_address = 192.168.113.43
                    number = 3
                    name = integ-ci-1
                    cluster = ocfs2

            node:
                    ip_port = 7777
                    ip_address = 192.168.112.217
                    number = 4
                    name = integ-hm8
                    cluster = ocfs2

            node:
                    ip_port = 7777
                    ip_address = 192.168.112.192
                    number = 5
                    name = integ-hm5
                    cluster = ocfs2


            Regards
            Prabu


_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
https://oss.oracle.com/mailman/listinfo/ocfs2-users
