Quoting Harry Palmer <tumblew...@fast-mail.org>:

Hi there.

I'm fairly new to OpenBSD and I'm hoping someone with a better
understanding of its disk handling than mine can help.

Beginning my effort to encrypt a 300GB drive in a 64-bit UltraSPARC,
I followed these initial steps (a rough command sketch follows the list):

1. used disklabel to create a single slice "a" on the drive

2. made a file system with newfs (is it necessary to have so many
   backup superblocks?)

3. mounted sd2a on "/home/cy" and created an empty file
     "/home/cy/cryptfile" with touch

4. zeroed out the file (and effectively the drive) with
     "dd if=/dev/zero of=/home/cy/cryptfile bs=512"


Here's the (eventual!) output of (4):

 /home/cy: write failed, file system is full
 dd: /home/cy/cryptfile: No space left on device
 576520353+0 records in
 576520352+0 records out
 295178420224 bytes transferred in 19810.722 secs (14899932 bytes/sec)
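
For what it's worth, those numbers are self-consistent: each record
is one 512-byte block, and

 576520352 records x 512 bytes/record = 295178420224 bytes

which matches the file size ls reports below.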



Now I have:

 # disklabel sd2a
 # /dev/rsd2a:
 type: SCSI
 disk: SCSI disk
 label: MAW3300NC
 flags: vendor
 bytes/sector: 512
 sectors/track: 930
 tracks/cylinder: 8
 sectors/cylinder: 7440
 cylinders: 13217
 total sectors: 585937500
 rpm: 10025
 interleave: 1
 boundstart: 0
 boundend: 585937500
 drivedata: 0

 16 partitions:
 #                size           offset  fstype [fsize bsize  cpg]
   a:        585937200                0  4.2BSD   2048 16384    1
   c:        585937500                0  unused
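
(Checking my arithmetic on the label: 585937500 sectors x 512
bytes/sector = 300,000,000,000 bytes, i.e. the nominal "300GB" in
decimal units, or roughly 279G in the base-2 units df uses; the 275G
df reports below is presumably that minus filesystem overhead.)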


and:

 # ls -l /home/cy
 total 576661216
 -rw-r--r--  1 root  wheel  295178420224 Jun 16 03:39 cryptfile


and:

 # df -h
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/sd0a     1007M   44.8M    912M     5%    /
 /dev/sd0k      247G    2.0K    235G     0%    /home
 /dev/sd0d      3.9G    6.0K    3.7G     0%    /tmp
 /dev/sd0f      2.0G    559M    1.3G    29%    /usr
 /dev/sd0g     1007M    162M    795M    17%    /usr/X11R6
 /dev/sd0h      5.9G    212K    5.6G     0%    /usr/local
 /dev/sd0j      2.0G    2.0K    1.9G     0%    /usr/obj
 /dev/sd0i      2.0G    2.0K    1.9G     0%    /usr/src
 /dev/sd0e      7.9G    7.7M    7.5G     0%    /var
 /dev/sd2a      275G    275G  -13.7G   105%    /home/cy



I don't understand this at all. I've never seen df output
telling me I'm using 13GB more space than the drive is
capable of holding.

I ask here because there's obviously potential for me to lose
data somewhere down the line. I'll be grateful if anyone can
explain where I've gone wrong.

I've seen greater than 100% full on a UFS/FFS filesystem before,
when the filesystem gets filled right up as root.  The filesystem
reserves a portion of its space (the "minfree" value, 5% by default)
that only root is allowed to write into, and df computes capacity
against the unreserved space, so a root-owned write like your dd can
push Capacity past 100% and drive Avail negative.
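
A quick check of that theory against your numbers: 5% of the 275G
filesystem is

 275G x 0.05 = 13.75G

which lines up almost exactly with the -13.7G in your Avail column.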

So what you need to do is set up your "dd" to stop before it fills
the filesystem, using bs=# and count=#.  You can get those numbers
before you start initializing your file by running df without -k or
-h, which reports sizes in 512-byte blocks.
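
Something along these lines might do it (a sketch, untested; the awk
field assumes the usual df column layout and 512-byte blocks):

 # avail=$(df /home/cy | awk 'NR==2 {print $4}')
 # dd if=/dev/zero of=/home/cy/cryptfile bs=512 count="$avail"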

I'm sure the fine people on these lists will correct me if I'm wrong
in my assumptions...  :-)

George Morgan
