Hello Sascha,

Wednesday, February 14, 2007, 6:45:30 AM, you wrote:

SB> On 13.02.2007 at 22:46, Ian Collins wrote:

>> [EMAIL PROTECTED] wrote:
>>
>>> Hello,
>>>
>>> I switched my home server from Debian to Solaris. The main reason
>>> for this step was stability and ZFS. But now, after the migration
>>> (why isn't it possible to mount a Linux fs on Solaris???), I ran a
>>> few benchmarks, and now I'm thinking about switching back to
>>> Debian. First of all, the hardware layout of my home server:
>>>
>>> Mainboard: Asus A7V8X-X
>>> CPU: AthlonXP 2400+
>>> Memory: 1.5GB
>>> Harddisks: 1x160GB (IDE, c0d1), 2x250GB (IDE, c1d0 + c1d1), 4x250GB
>>> (SATA-1, c2d0,c2d1,c3d0,c3d1)
>>> SATA Controller: SIL3114 (downgraded to the IDE firmware)
>>> Solaris nv_54
>>>
>>> Then I compiled the newest version of bonnie++ and ran some
>>> benchmarks, first on a ZFS mirror (/data/) created with the
>>> 250GB IDE disks:
>>>
>>> $ ./bonnie++ -d /data/ -s 4G -u root
>>> Using uid:0, gid:0.
>>> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>>>                  4G 17832  25 17013  33  4630  12 21778  38 26839  11  66.0   2
>>>
>> Looks like poor hardware. How was the pool built? Did you give ZFS
>> the entire drive?
>>
>> On my nForce4 Athlon64 box with two 250G SATA drives,
>>
>> zpool status tank
>>   pool: tank
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         tank        ONLINE       0     0     0
>>           mirror    ONLINE       0     0     0
>>             c3d0    ONLINE       0     0     0
>>             c4d0    ONLINE       0     0     0
>>
>> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
>> bester           4G 45036  21 47972   8 32570   5 83134  80 97646  12 253.9   0
>>
>> dd from the mirror gives about 77MB/s
>>
>> Ian.
>>
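
Aside on Ian's whole-disk question: when ZFS gets whole disks rather
than slices, it labels them with an EFI label and can enable the disks'
write cache, which often makes a big difference on IDE/SATA hardware.
A minimal sketch (hypothetical pool and device names):

  # whole disks: ZFS can turn the write cache on
  zpool create tank mirror c3d0 c4d0

  # slices: the write cache stays off by default
  zpool create tank mirror c3d0s0 c3d1s0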

SB> I use the entire drive for the zpools:

SB>    pool: data
SB> state: ONLINE
SB> scrub: none requested
SB> config:

SB>          NAME        STATE     READ WRITE CKSUM
SB>          data        ONLINE       0     0     0
SB>            mirror    ONLINE       0     0     0
SB>              c1d0    ONLINE       0     0     0
SB>              c1d1    ONLINE       0     0     0

SB> errors: No known data errors

SB>    pool: srv
SB> state: ONLINE
SB> scrub: none requested
SB> config:

SB>          NAME        STATE     READ WRITE CKSUM
SB>          srv         ONLINE       0     0     0
SB>            raidz1    ONLINE       0     0     0
SB>              c2d0    ONLINE       0     0     0
SB>              c2d1    ONLINE       0     0     0
SB>              c3d0    ONLINE       0     0     0
SB>              c3d1    ONLINE       0     0     0

SB> how could I dd from the zpools? Where is the block device?

There's no block device associated with a pool.

However, you can create a zvol (see man zfs)
or just create one large file and dd that.
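
For example, a minimal sketch (the volume and file names are made up;
the zvol device paths are the standard Solaris locations):

  # create a 4GB zvol; block devices show up under /dev/zvol/dsk/data
  # (buffered) and /dev/zvol/rdsk/data (raw)
  zfs create -V 4g data/testvol

  # fill it, then read it back through the raw device
  dd if=/dev/zero of=/dev/zvol/rdsk/data/testvol bs=1024k count=4096
  dd if=/dev/zvol/rdsk/data/testvol of=/dev/null bs=1024k

  # or simply write one large file and read it back
  dd if=/dev/zero of=/data/testfile bs=1024k count=4096
  dd if=/data/testfile of=/dev/null bs=1024k

Note that reads from a file or zvol go through the ARC, so the numbers
can be inflated by caching unless the test size is well above RAM.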


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

