I experimented with a few more things, but nothing helped. Someone suggested running a bonnie++ benchmark to verify the performance. bonnie++ basically told me what dd did: svnd backed by a file is slow, and svnd backed by a whole disk or a partition is floppy-disk slow.
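
For reference, this is roughly how bonnie++ gets run for a test like this (the directory, file size, and claimed RAM size here are illustrative, not my exact flags):

  # -d: directory on the filesystem under test, -s: test file size in MB,
  # -r: claimed RAM size in MB (bonnie++ wants -s to be at least 2x -r),
  # -u: user to run the benchmark as
  bonnie++ -d /mnt/test -s 40 -r 20 -u root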

Nonetheless, the bonnie++ results may provide some insight into the problem for an experienced guru. What I found interesting is that CPU usage is really low for writes and rewrites when svnd is backed by the whole disk, which is also the slowest configuration.

It seems like there may be some alignment issues between the underlying storage device and the svnd device. That's why I was trying all combinations of block and fragment sizes, cylinders per group, geometries (CHS) for fdisk, etc. Hopefully someone can shed some light on this problem.
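
One way to look for a mismatch is to compare the geometry reported for the real disk with what the configured svnd device ends up with, along these lines (device names are just examples):

  fdisk wd1          # MBR partition table and disk/BIOS geometry of the real disk
  disklabel wd1      # disklabel of the real disk
  fdisk svnd0        # geometry as seen on the configured svnd device
  disklabel svnd0    # disklabel written onto the svnd device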


bonnie++ benchmark
------------------

wd0d (slow old disk)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M  4956  13  4934   4  2950   2  8622  30  8754   3 183.1   0

wd1d (fast new disk)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M 45424  97 42832  38 10362   7 26344  91 47501  17 366.7   1

svnd0a (associated with wd1c; fdisk: used disk/BIOS geometry)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M   230   3   235   2  3609  63  6786  67  8615  57 131.3  13

svnd0a (associated with wd1c; fdisk: used OpenBSD MBR partition geometry)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M   230   3   235   2  3641  61  6594  66  8637  58 137.6  13

svnd0a (associated with wd1a; fdisk -c 6659 -h 5 -s 63 -i svnd0)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M  1462  18  1751  18  5404  88  9551  89 13559  83 168.0  14

svnd0a (associated with a 500MB file of random data on wd1d)
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foo.mokaz.com   40M  8085  99  9444  99  6324  93 10517  96 15885  97 202.6  15


Clint Pachl wrote:
Reading through the archives, I have found several people saying that encrypting via an svnd device isn't much slower than writing directly to a raw unencrypted disk. While I found this to be true for svnd devices backed by files, svnd devices backed by whole disks and disk partitions are extremely slow. I have tried tuning many parameters, namely the fragment size, block size, and cylinders per group in the disklabel associated with the svnd, but nothing has improved the performance.

I am running 4.1 on a single 800MHz i386 P3. Encrypting an underlying device (file, partition, or disk) works perfectly otherwise. I also double-checked my procedure against https://www.mainframe.cx/~ckuethe/encrypted_disks.html.
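
For reference, the setup I am describing is essentially the usual svnd procedure, roughly like this (device names and mount point are examples, not my exact commands):

  vnconfig -ck svnd0 /dev/wd1a    # associate svnd0 with the partition; prompts for a passphrase
  fdisk -i svnd0                  # write an MBR onto the svnd device
  disklabel -E svnd0              # create an 'a' partition in the label editor
  newfs /dev/rsvnd0a              # build the filesystem
  mount /dev/svnd0a /mnt/backup   # mount it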

Not knowing what to tune to speed things up, I started by trying all combinations of the following in the svnd disklabel (assuming they get passed to newfs); a rough newfs sketch follows the list:

fragment size: 2K, 4K
block size: 16K, 32K
cylinders per group: 16, 1568, 1936, 4K, 8K, 16K (sometimes after newfs'ing, cpg was reset to some other value; that's where the 1568 and 1936 come from)
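
Equivalently, these combinations can be fed straight to newfs instead of going through the disklabel, for example (values picked from the lists above; the device name is an example):

  # -f: fragment size, -b: block size, -c: cylinders per group
  newfs -f 2048 -b 16384 -c 16 /dev/rsvnd0a
  newfs -f 4096 -b 32768 -c 1568 /dev/rsvnd0a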

I have also tried mounting the svnd device using the async and noatime flags, but that doesn't make any real difference.
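
That is, something like this (the mount point is an example):

  mount -o async,noatime /dev/svnd0a /mnt/backup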

Using vnconfig, I also tried associating the svnd device with the raw direct access device (i.e. /dev/rwd1[ac]), but then fdisk'ing the svnd device complains. I tried this because I thought there might be a double-buffering issue.
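
In other words, the raw-device association is the variant that fails; the block device is what I normally use (device names as in the tests below):

  vnconfig -ck svnd0 /dev/rwd1c   # raw device: fdisk on svnd0 then complains
  vnconfig -ck svnd0 /dev/wd1c    # block device: the configuration that works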

I also tried encryption with and without a salt file, but that didn't make any noticeable difference.
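
For concreteness, the two variants look something like this (the round count and salt file path are placeholders, and the exact flags may differ slightly on 4.1):

  vnconfig -c -K 8192 -S /etc/svnd0.salt svnd0 /dev/wd1a   # PBKDF2 passphrase with a salt file
  vnconfig -ck svnd0 /dev/wd1a                             # plain passphrase prompt, no salt file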

Here are some write performance numbers using dd and cp:
* for dd I used block sizes of 512, 1K, 2K, 4K, 8K, 16K (see the sketch after this list)
* for cp I used the command `cd /<enc-dev>; time cp -R /bin /sbin .`
* all dd commands made files > 40MB, which is more than 4 times the disk's cache
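
The dd and cp runs were along these lines (the mount point and counts are illustrative; each dd writes more than 40MB):

  dd if=/dev/zero of=/mnt/backup/ddtest bs=512 count=100000   # ~49MB in 512-byte blocks
  dd if=/dev/zero of=/mnt/backup/ddtest bs=16k count=3000     # ~47MB in 16K blocks
  cd /mnt/backup && time cp -R /bin /sbin .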

Direct disk (no svnd)
 dd: 49MB/s - 100MB/s
 cp: 2.43s real

svnd backed by disk (wd1c)
 dd: 248KB/s - 500KB/s
 cp: 1m21.44s real

svnd backed by partition (wd1a)
 dd: 1.8MB/s - 2.8MB/s
 cp: 11.53s real

svnd backed by file
 dd: 8.6MB/s - 9.7MB/s
 cp: 2.66s real

The system was dedicated to these tests, and the CPU was about 80% idle while the dd and cp commands were running.

What I really want is to encrypt the whole disk or a single partition covering the whole disk. If I could get the write performance of the disk/partition up to "svnd backed by file" speeds, I would be happy. This is my network backup server that almost 20 machines back up to, so 1MB/s to 2MB/s just isn't going to cut it.

In case somebody asks, I want to encrypt my backup data because I periodically pull the disk and store it at my girlfriend's office.

Any performance-enhancing suggestions or alternative methods would be greatly appreciated. I have thought about encrypting each backup using openssl, but I would have to script something for that. I am looking for automation, and I feel vnconfig with encryption provides it, just not very quickly.
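
If I did go the openssl route, each backup would be wrapped with something roughly like this (paths and key handling are just a sketch):

  # hypothetical per-host backup: tar the data and pipe it through openssl
  tar cf - /home/backups/hostA | \
      openssl enc -aes-256-cbc -salt -pass file:/etc/backup.key \
      > /mnt/backup/hostA.tar.enc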

-pachl
