Thanks Sunil, that makes sense. The largest number of nodes I'd ever mount
these on concurrently is 12; I just went a bit bigger for "possible" growth.
I appreciate the fast response.
Jerry
Sunil Mushran wrote:
-N 16 means 16 journals. I think it defaults to 256M journals, so
that's 4G. Do you plan to mount it on 16 nodes? If not, reduce that.
The other option is a smaller journal, but you have to be careful, as a
small journal could limit your write throughput.
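For an existing volume, both can be changed in place with tunefs.ocfs2.
Just a sketch, assuming the volume is unmounted on all nodes and your
ocfs2-tools build supports shrinking journals and removing unused slots
(the device name is only an example from your df output below):

# shrink each per-slot journal to 64M
tunefs.ocfs2 -J size=64M /dev/dm-12
# reduce the slot count to match the real node count
tunefs.ocfs2 -N 4 /dev/dm-12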
On Mon, Apr 15, 2013 at 1:37 PM, Jerry Smith <jds...@sandia.gov> wrote:
Good afternoon,
I have an OEL 6.3 box with a few ocfs2 filesystems mounted locally, and was
wondering how much space I should expect to lose to formatting overhead,
etc., from a disk usage standpoint.
-bash-4.1$ df -h | grep ocfs2
/dev/dm-15 12G 1.3G 11G 11% /ocfs2/redo0
/dev/dm-13 120G 4.2G 116G 4% /ocfs2/software-master
/dev/dm-10 48G 4.1G 44G 9% /ocfs2/arch0
/dev/dm-14 2.5T 6.7G 2.5T 1% /ocfs2/ora01
/dev/dm-11 1.5T 5.7G 1.5T 1% /ocfs2/ora02
/dev/dm-17 100G 4.2G 96G 5% /ocfs2/ora03
/dev/dm-12 200G 4.3G 196G 3% /ocfs2/ora04
/dev/dm-16 3.0T 7.3G 3.0T 1% /ocfs2/orabak01
-bash-4.1$
For example, ora04 is 200GB in size, but with nothing stored on it, it shows
4.3GB used:
[root@oeldb10 ~]#df -h /ocfs2/ora04
Filesystem Size Used Avail Use% Mounted on
/dev/dm-12 200G 4.3G 196G 3% /ocfs2/ora04
[root@oeldb10 ~]#find /ocfs2/ora04/ | wc -l
3
[root@oeldb10 ~]#find /ocfs2/ora04/ -exec du -sh {} \;
0 /ocfs2/ora04/
0 /ocfs2/ora04/lost+found
0 /ocfs2/ora04/db66snlux
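In case it helps narrow things down, I believe the used space can be
accounted for by listing the system directory, where the per-slot journals
live -- a rough sketch, assuming the debugfs.ocfs2 shipped with ocfs2-tools
here supports these requests:

# list the system files (journal:0000 .. journal:0015, etc.)
debugfs.ocfs2 -R "ls -l //" /dev/dm-12
# show details, including size, for one of the journals
debugfs.ocfs2 -R "stat //journal:0000" /dev/dm-12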
Filesystems were formatted via:
mkfs -t ocfs2 -N 16 --fs-features=xattr,local -L ${device} ${device}
Mount options:
[root@oeldb10 ~]#mount |grep ora04
/dev/dm-12 on /ocfs2/ora04 type ocfs2
(rw,_netdev,nointr,user_xattr,heartbeat=none)
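Happy to reformat if the slot count or journal size turns out to be the
culprit. Something like the following is what I'd try -- an untested sketch,
assuming four slots cover the nodes that will actually mount it and that a
64M journal is enough for our write load:

mkfs -t ocfs2 -N 4 -J size=64M --fs-features=xattr,local -L ${device} ${device}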
Thanks,
--Jerry
_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
https://oss.oracle.com/mailman/listinfo/ocfs2-users