Testing OCFS2 on a two-node cluster. Both nodes are HP DL380 G5 machines running RHEL4 U3 x86_64 (kernel 2.6.9-34.ELsmp). OCFS2 version 1.2.5-1. Shared storage is an MSA1000.
Formatted a 142GB volume with:

  mkfs.ocfs2 -b 4K -C 32K -N 4 -L u01 /dev/sda1

Mounted the formatted partition on both nodes:

  mount -t ocfs2 /dev/sda1 /u01

Created a file on oracle1:

  [EMAIL PROTECTED] bin]# echo test > /u01/test2

The file is visible on both nodes:

  [EMAIL PROTECTED] bin]# ls -l /u01/test2
  -rw-r--r-- 1 root root 5 Jun  6 13:22 /u01/test2
  [EMAIL PROTECTED] bin]# ls -l /u01/test2
  -rw-r--r-- 1 root root 5 Jun  6 13:22 /u01/test2

Created a file on oracle0:

  [EMAIL PROTECTED] bin]# echo test > /u01/test3
  echo: write error: Input/output error

OCFS2 marks the file system read-only on oracle0. From /var/log/messages on oracle0:

  Jun  6 11:07:05 oracle0 kernel: OCFS2: ERROR (device sda1): ocfs2_check_group_descriptor: Group Descriptor # 0 has bad signature
  Jun  6 11:07:05 oracle0 kernel: File system is now read-only due to the potential of on-disk corruption. Please run fsck.ocfs2 once the file system is unmounted.

Unmounted the partition on both nodes and ran fsck:

  [EMAIL PROTECTED] bin]# /sbin/fsck.ocfs2 /dev/sda1
  Checking OCFS2 filesystem in /dev/sda1:
    label:              /u01
    uuid:               64 ef 7f 3a b7 03 4f 8e 8b e2 82 aa 5b c2 cc ea
    number of blocks:   35563880
    bytes per block:    4096
    number of clusters: 4445485
    bytes per cluster:  32768
    max slots:          4
  /dev/sda1 is clean.  It will be checked after 20 additional mounts.

Remounting the volume reproduces the same scenario.

_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users
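One note on the fsck step above: run without options, fsck.ocfs2 trusts the superblock's "clean" flag and skips the full pass, which is why it reports clean even though the kernel just logged a bad group descriptor. A sketch of a forced check, assuming the -n (read-only, answer "no" to all questions) and -f (force a full check even if marked clean) flags as documented in fsck.ocfs2(8) for the 1.2.x tools:

```shell
# Unmount on BOTH nodes first, then on one node:

# Read-only dry run: reports any group-descriptor/bitmap damage
# without touching the disk.
fsck.ocfs2 -n /dev/sda1

# Forced full check and repair, ignoring the "clean" flag.
fsck.ocfs2 -f /dev/sda1
```

If the forced pass also comes back clean while the error keeps reproducing, that points away from simple on-disk corruption and toward something between the nodes and the array (e.g. stale paths or write caching on the MSA1000), which would be worth mentioning in any follow-up.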
