On 13.02.2007 at 22:46, Ian Collins wrote:

[EMAIL PROTECTED] wrote:

Hello,

I switched my home server from Debian to Solaris. The main reasons for
this step were stability and ZFS.
But now, after the migration (why isn't it possible to mount a Linux
fs on Solaris???), I ran a few benchmarks,
and now I'm thinking about switching back to Debian. First, the
hardware layout of my home server:

Mainboard: Asus A7V8X-X
CPU: AthlonXP 2400+
Memory: 1.5GB
Harddisks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1), 4x250GB
(SATA-1, c2d0,c2d1,c3d0,c3d1)
SATA Controller: SIL3114 (downgraded to the IDE-FW)
Solaris nv_54

Then I compiled the newest version of bonnie++ and ran some benchmarks,
first on a ZFS mirror (/data/) created with
the 250GB IDE disks:

$ ./bonnie++ -d /data/ -s 4G -u root
Using uid:0, gid:0.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2

Looks like poor hardware. How was the pool built? Did you give ZFS the
entire drive?

On my nForce4 Athlon64 box with two 250G SATA drives,

zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0

dd from the mirror gives about 77MB/s
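
A minimal sketch of how such a dd throughput check might look (the directory and file sizes are assumptions, not from the original mail):

```shell
# Hedged sketch: measuring sequential throughput on a ZFS dataset with dd.
# DIR is an assumption; point it at the pool's mountpoint (e.g. /tank).
DIR=${DIR:-/tmp}

# Write a test file (1024 blocks of 1 MiB = 1 GiB), then read it back.
# Note: rereading a just-written file may be served from the ARC cache;
# use a file larger than RAM to get real disk numbers.
dd if=/dev/zero of="$DIR/ddtest" bs=1048576 count=1024
dd if="$DIR/ddtest" of=/dev/null bs=1048576

rm -f "$DIR/ddtest"
```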

Ian.


I used the entire drives for the zpools:

  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0

errors: No known data errors

  pool: srv
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        srv         ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

How can I dd from the zpools? Where is the block device?
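
A hedged sketch of the two usual approaches (the pool and volume names below are assumptions): a plain ZFS filesystem exposes no block device, so you dd a large file on its mountpoint; a zvol, by contrast, does appear as a device node on Solaris.

```shell
# Hedged sketch: a plain ZFS filesystem has no traditional block device,
# so there is nothing like /dev/dsk/... to dd from directly.

# Option 1: dd a large file that lives on the dataset's mountpoint
# (make it larger than RAM, or the ARC cache will inflate the numbers):
dd if=/data/bigfile of=/dev/null bs=1048576

# Option 2 (assumption: a pool named "data" exists): create a zvol,
# which does show up as a block device under /dev/zvol/dsk/ on Solaris:
zfs create -V 2g data/testvol
dd if=/dev/zvol/dsk/data/testvol of=/dev/null bs=1048576
```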

sascha
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
