I am new to SX:CE (Solaris 11) and ZFS, but I think I have found a bug.

I have eight 10GB drives. 

When I installed SX:CE (snv_91) I chose option 3, "Solaris Interactive Text (Desktop 
Session)". The installer found all my drives, but I told it to use only two, 
giving me a 10GB mirrored rpool.

Immediately before the installation proper commenced, I dropped to a shell and typed 
this to enable compression:
# zfs set compression=on rpool
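
From that same shell you can confirm the setting took effect before continuing 
(the output here is just what I would expect, matching the listing further down):

# zfs get compression rpool
NAME   PROPERTY     VALUE  SOURCE
rpool  compression  on     local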

(Note: the editor removes multiple spaces and compresses these charts, wrecking 
the nice layout.)

That trick gives me this (great for tiny drives, since everything the installer 
writes afterwards lands compressed):

# zfs get -r compression
NAME                   PROPERTY     VALUE  SOURCE
rpool                  compression  on     local
rpool/ROOT             compression  on     inherited from rpool
rpool/ROOT/snv_91      compression  on     inherited from rpool
rpool/ROOT/snv_91/var  compression  on     inherited from rpool
rpool/dump             compression  off    local
rpool/export           compression  on     inherited from rpool
rpool/export/home      compression  on     inherited from rpool
rpool/swap             compression  on     inherited from rpool

# zfs get -r compressratio
NAME                   PROPERTY       VALUE  SOURCE
rpool                  compressratio  1.56x  -
rpool/ROOT             compressratio  1.68x  -
rpool/ROOT/snv_91      compressratio  1.68x  -
rpool/ROOT/snv_91/var  compressratio  2.05x  -
rpool/dump             compressratio  1.00x  -
rpool/export           compressratio  1.47x  -
rpool/export/home      compressratio  1.47x  -
rpool/swap             compressratio  1.00x  -
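
Rough arithmetic, assuming compressratio is simply uncompressed size divided by 
on-disk size: the root dataset's 3.01G REFER (in the zfs list further down) at 
1.68x works out to about 3.01 x 1.68 ≈ 5.1GB of logical data sitting on the 10GB 
mirror.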


I mention that so you don't wonder why my installation seems too small.


My "Bug Report" (or confusion about ZFS) is this:

I have six remaining 10GB drives, and I want to "raid" three of them and "mirror" 
them onto the other three, giving me RAID security and integrity with mirrored-drive 
performance. I then want to move my "/export" directory to the new pool.
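
Once the new pool exists, my rough (untested) plan for the /export move is 
something like the following, assuming zfs send/receive is the right tool and 
using "newpool" as a placeholder name for whatever I end up calling it:

# zfs snapshot -r rpool/export@migrate
# zfs send rpool/export@migrate | zfs receive newpool/export
# zfs send rpool/export/home@migrate | zfs receive newpool/export/home
# zfs destroy -r rpool/export
# zfs set mountpoint=/export newpool/export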
 

My SCSI drives are numbered c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 
c1t8d0 (Drive c1t7d0 was reserved so the number is skipped). 

I will type a few commands that I hope will provide some basic info (remember, I 
am new to this, so don't hesitate to ask for more info, and please don't flame me 
for my foolishness :) ).


# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
       1. c1t1d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
       2. c1t2d0 <VMware,-VMware Virtual S-1.0-10.00GB>
       3. c1t3d0 <VMware,-VMware Virtual S-1.0-10.00GB>
       4. c1t4d0 <VMware,-VMware Virtual S-1.0-10.00GB>
       5. c1t5d0 <VMware,-VMware Virtual S-1.0-10.00GB>
       6. c1t6d0 <VMware,-VMware Virtual S-1.0-10.00GB>
       7. c1t8d0 <VMware,-VMware Virtual S-1.0-10.00GB>
Specify disk (enter its number): ^C


# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.7M  5.42G  84.7M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           13.9M  5.42G    19K  /export
rpool/export/home      13.9M  5.42G  13.9M  /export/home
rpool/swap              640M  6.05G    16K  -


# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

==========

The "Bug" is that the drive sizes don't seem to add up correctly when I 
raid+mirror my drives.

The following display the sizes of three drives when mirrored or in raid 
configuration:

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zfs list | grep temparray
temparray              97.2K  19.5G  1.33K  /temparray
# zpool destroy temparray

# zpool create temparray mirror c1t2d0 c1t4d0 c1t5d0
# zfs list | grep temparray
temparray              89.5K  9.78G     1K  /temparray
# zpool destroy temparray
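
Those numbers match the rough arithmetic I expected (ignoring metadata overhead):

  raidz of 3 x 10GB drives:     (3 - 1) x 10GB = ~20GB usable   (zfs list shows 19.5G)
  3-way mirror of 10GB drives:   1 x 10GB      = ~10GB usable   (zfs list shows 9.78G)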


So far so good. Now for what I wanted to do: raidz + mirror, and then move 
"/export" to the new pool.


Some web page suggested I could do this (wrong):

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zpool attach temparray mirror c1t2d0 c1t3d0
too many arguments


The man page says the correct syntax is this (still no cigar):

# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0
# zpool attach temparray c1t2d0 c1t3d0
cannot attach c1t3d0 to c1t2d0: can only attach to mirrors and top-level disks
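
For comparison, attach does seem to work when the existing vdev is a plain disk; 
a throwaway test along these lines should turn a single disk into a two-way 
mirror (not the layout I want, just to illustrate what attach expects):

# zpool create testpool c1t3d0
# zpool attach testpool c1t3d0 c1t6d0
# zpool status testpool
# zpool destroy testpool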


So let's combine everything on one line (like that's gonna work, but it did, 
sort of):


# zpool create temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 c1t8d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both raidz and mirror vdevs are present

# zpool create -f temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 c1t8d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        temparray     ONLINE       0     0     0
          raidz1      ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c1t8d0    ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  4.36G  5.42G    35K  /rpool
rpool/ROOT             3.09G  5.42G    18K  legacy
rpool/ROOT/snv_91      3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump              640M  5.42G   640M  -
rpool/export           12.9M  5.42G    19K  /export
rpool/export/home      12.9M  5.42G  12.9M  /export/home
rpool/swap              640M  6.05G    16K  -
temparray               115K  29.3G  21.0K  /temparray


The question (Bug?) is: "Shouldn't I get this instead?"

# zfs list | grep temparray
temparray              97.2K  19.5G  1.33K  /temparray

Why do I get 29.3G instead of 19.5G?

Thanks for any help,
Rob
 
 