Alison,

Can you perform the following on the node that is failing:
# date
# /etc/init.d/o2cb stop
# /etc/init.d/o2cb online
# mount LABEL=u01_oradata /u02

Then copy the piece of /var/log/messages starting about 5 minutes
before the output of the date command above and send it to me. Also,
if you can cut/paste the output of the sequence above and attach it
to the email, that would help (see the P.S. at the bottom of this
mail for one way to capture both).

Alison Jolley wrote:
> I'm able to ping both IP addresses, and I'm also using the same
> ocfs2 version on both nodes. I've also rerun the mounted.ocfs2
> command and attached a file containing the output.
> Thanks!
>
> Alison
>
> Marcos E. Matsunaga wrote:
>> Alison,
>>
>> The outputs of mounted.ocfs2 are both from node 1 (tap2d2), but I
>> believe you'll see the same UUID from node 0 (tap2d1).
>>
>> Make sure the nodes can ping each other using the IP addresses you
>> have specified in cluster.conf.
>>
>> Also, make sure both nodes have the same ocfs2 version
>> (rpm -qa | grep ocfs2).
>>
>> Alison Jolley wrote:
>>> Marcos,
>>> I've attached the error I get during startup, as well as the
>>> output from mounted.ocfs2 and the cluster.conf files.
>>>
>>> Here is the other information you requested:
>>> OS version: Red Hat, kernel 2.6.9-42.ELsmp
>>> OCFS2 version: 1.2.5-1
>>> Environment: EMC Clariion connected via fibre to two Dell 1950s.
>>>
>>> Thanks!
>>>
>>> Alison
>>>
>>> Marcos E. Matsunaga wrote:
>>>> Alison,
>>>>
>>>> You should use screen (depending on the distribution, it is
>>>> installed automatically) while mounting on the second node and
>>>> capture the output (ctrl-a and shift-h will start/stop the
>>>> capture). That may give you a clue about what is going on. If you
>>>> don't find the problem, please add some details such as kernel
>>>> version, disk storage (iscsi, FC, scsi, etc.), ocfs2 version, and
>>>> the network interface that ocfs2 is using for DLM. It would also
>>>> be interesting to see /etc/ocfs2/cluster.conf from both nodes and
>>>> the output of mounted.ocfs2 -d on both nodes.
>>>>
>>>> Alison Jolley wrote:
>>>>> I'm having issues with ocfs2 mounting. I have a 2-node cluster,
>>>>> and one of the nodes works fine. The other one is showing
>>>>> "flaky" behavior. At server startup it attempts to mount the
>>>>> drive, but it fails and produces an error that scrolls by too
>>>>> fast to read (I checked /var/log/messages and /var/log/dmesg
>>>>> with no luck). I can mount the drive immediately after startup,
>>>>> which produces no errors, but it unmounts itself within a matter
>>>>> of hours (again, no errors in messages or dmesg). Is there
>>>>> another log I should look at? Does anyone have any ideas as to
>>>>> why this keeps failing?
>>>>> Thanks!
>>>>>
>>>>> Alison
>>>>> _______________________________________________
>>>>> Ocfs2-users mailing list
>>>>> [email protected]
>>>>> http://oss.oracle.com/mailman/listinfo/ocfs2-users
>>>>
>>>> --
>>>>
>>>> Regards,
>>>>
>>>> Marcos Eduardo Matsunaga
>>>>
>>>> Oracle USA
>>>> Linux Engineering
>>>>
>>
>> --
>>
>> Regards,
>>
>> Marcos Eduardo Matsunaga
>>
>> Oracle USA
>> Linux Engineering
>>
> _______________________________________________
> Ocfs2-users mailing list
> [email protected]
> http://oss.oracle.com/mailman/listinfo/ocfs2-users

--

Regards,

Marcos Eduardo Matsunaga

Oracle USA
Linux Engineering
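P.S. In case it helps, here is one rough way to capture both pieces in
files you can attach. The file names and the tail length are only
examples, adjust as needed (tail should go back far enough to cover
the ~5 minutes before the date command):

# script /tmp/o2cb-restart.txt     <- record the whole terminal session to this file
# date
# /etc/init.d/o2cb stop
# /etc/init.d/o2cb online
# mount LABEL=u01_oradata /u02
# exit                             <- ends the script recording
# tail -n 300 /var/log/messages > /tmp/messages-snippet.txt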
_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users
