Re: [zfs-discuss] Weird drive configuration, how to improve the situation

2010-03-02 Thread Cindy Swearingen

Hi Thomas,

I see that Richard has suggested mirroring your existing pool by
attaching slices from your 1 TB disk if the sizing is right.

You mentioned file security, and I think you mean protecting your data
from hardware failures. Another option is to get one more disk and
convert this non-redundant pool to a mirrored pool by attaching the 1 TB
disk and the additional, similarly sized disk, one to each existing
disk. See the example below.


Another idea would be to create a new pool with the 1 TB disk and then
use zfs send/receive to copy over the data from swamp. However, this
wouldn't let you reuse swamp's 500 GB disks by attaching them to the new
pool, because they are smaller than the 1 TB disk.

Keep in mind that if you do recreate this pool as a mirrored
configuration:

mirror pool = one 500 GB disk + one 500 GB disk, total capacity is 500 GB
mirror pool = one 500 GB disk + one 1 TB disk, total capacity is 500 GB

Because of the unequal disk sizes, the mirrored pool capacity is limited
by the smallest disk.
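If you do go with Richard's slice approach instead, a rough sketch would
be to partition the 1 TB disk into two roughly 500 GB slices and attach
one slice to each existing disk. The device name c3d0 and the slice
numbers below are assumptions, and this only works if each slice is at
least as large as the 500 GB disks:

# zpool attach swamp c1d0 c3d0s0
# zpool attach swamp c2d0 c3d0s1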

Thanks,

Cindy

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  c2t7d0    ONLINE   0 0 0
  c2t8d0    ONLINE   0 0 0

errors: No known data errors
# zpool attach tank c2t7d0 c2t9d0
# zpool attach tank c2t8d0 c2t10d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Mar  2 
14:32:21 2010

config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c2t7d0   ONLINE   0 0 0
c2t9d0   ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c2t8d0   ONLINE   0 0 0
c2t10d0  ONLINE   0 0 0  56.5K resilvered

errors: No known data errors

On 03/02/10 12:58, Thomas W wrote:

Hi!

I'm new to ZFS so this may be (or certainly is) a kind of newbie question.

I started with a small server I built from parts I had left over.
I only had 2 500GB drives and wanted to go for space, so I just created a zpool
without any options. That now looks like this.

NAME    STATE READ WRITE CKSUM
swamp   ONLINE   0 0 0
  c1d0  ONLINE   0 0 0
  c2d0  ONLINE   0 0 0

So far so good. But like always the provisional solution became a permanent 
solution. Now I have an extra 1TB disk that I can add to the system. And I want 
to go for file security.

How can I get the best out of this setup? Is there a way of mirroring the data
automatically between those three drives?

Any help is appreciated but please don't tell me I have to delete anything ;)

Thanks a lot,
  Thomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hotplugging sata drives in opensolaris

2010-03-05 Thread Cindy Swearingen

Hi David,

I think installgrub is unhappy that no s2 exists on c7t1d0.

I would detach c7t1d0s0 from the pool and follow these steps
to relabel/repartition this disk:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk

Then, reattach the disk and run installgrub after the disk is
resilvered.
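
As a rough sketch (device names are taken from your output; double-check
them before running anything), the sequence might look like:

# zpool detach rpool c7t1d0s0
(relabel/repartition c7t1d0 with an SMI label and a slice 0, per the wiki)
# zpool attach rpool c7t0d0s0 c7t1d0s0
(wait for the resilver to complete)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0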

Thanks,

Cindy


On 03/05/10 10:43, xx wrote:

I am attempting to follow the recipe in:
http://blogs.sun.com/sa/entry/hotplugging_sata_drives

The recipe copies the vtoc from the old drive to the new drive and then does an
attach. When I get to the attach, the partition slices on the new drive
overlap (the partition slices on the old drive overlap), and I get an error
message:
init...@dogpatch:~# zpool attach rpool c7t0d0s0 c7t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7t1d0s0 overlaps with /dev/dsk/c7t1d0s2

here is the vtoc of the original drive:
init...@dogpatch:~# prtvtoc /dev/rdsk/c7t0d0s2
* /dev/rdsk/c7t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 63 sectors/track
* 255 tracks/cylinder
* 16065 sectors/cylinder
* 36471 cylinders
* 36469 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
*           First     Sector    Last
*           Sector     Count    Sector
*                0      16065     16064
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00       16065  585858420  585874484
       2      5    01           0  585874485  585874484
       8      1    01           0      16065     16064
init...@dogpatch:~#

The original OpenSolaris install would have partitioned and formatted this
drive so that partitions 0 and 2 overlap. To solve the problem, I used format
to zero out s2 on the new drive, and then the attach for s0 worked. However,
it caused the next step to fail:

init...@dogpatch:~$ installgrub /boot/grub/stage1 /boot/grub/stage2 
/dev/rdsk/c7t1d0s0
cannot open/stat device /dev/rdsk/c7t1d0s2

Should I have zeroed s0 on the new drive and attached s2? What should I have
done differently?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool on sparse files

2010-03-05 Thread Cindy Swearingen

Hi Greg,

You are running into this bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751

Currently, building a pool from files is not fully supported.

Thanks,

Cindy
On 03/05/10 16:15, Gregory Durham wrote:

Hello all,
I am using Opensolaris 2009.06 snv_129
I have a quick question. I have created a zpool on a sparse file, for 
instance:


zpool create stage c10d0s0
mount it to /media/stage
mkfile -n 500GB /media/stage/disks/disk.img
zpool create zfsStage /media/stage/disks/disk.img
I want to be able to take this file and back it up to tape. That part is
fine, but if I reboot I see:


pool: zfsStage
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAME                               STATE    READ WRITE CKSUM
zfsStage                           UNAVAIL     0     0     0  insufficient replicas
  /media/zfsStage/disks/disk1.img  UNAVAIL     0     0     0  cannot open


I then tried:
$zpool online zfsStage /media/zfsStage/disks
 cannot open 'zfsStage': pool is unavailable
$ zpool online zfsStage /media/zfsStage/disks/disk1.img
 cannot open 'zfsStage': pool is unavailable

Am I doing something wrong? When I restore from tape how would I mount 
this if this sort of thing is coming up? Hopefully someone can help me out.


Thanks,
Greg

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] zpool on sparse files

2010-03-08 Thread Cindy Swearingen

Greg,

Sure, lofiadm should work, but another underlying issue is that,
currently, building pools on top of other pools can cause the
system to deadlock or panic.

This kind of configuration is just not supported or recommended
at this time.
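
If you do experiment with lofiadm anyway, a minimal sketch would look
like the following. The file path and size are taken from your example,
the lofi device number depends on what is already attached, and the same
file-backed-pool caveats still apply:

# mkfile -n 500g /media/stage/disks/disk.img
# lofiadm -a /media/stage/disks/disk.img
/dev/lofi/1
# zpool create zfsStage /dev/lofi/1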

Thanks,

Cindy




On 03/05/10 17:38, Gregory Durham wrote:
Great... will using lofiadm still cause this issue, either by using
mkfile or by using dd to make a sparse file? Thanks for the heads up!


On Fri, Mar 5, 2010 at 3:48 PM, Cindy Swearingen
<cindy.swearin...@sun.com> wrote:


Hi Greg,

You are running into this bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751

Currently, building a pool from files is not fully supported.

Thanks,

Cindy

On 03/05/10 16:15, Gregory Durham wrote:

Hello all,
I am using Opensolaris 2009.06 snv_129
I have a quick question. I have created a zpool on a sparse
file, for instance:

zpool create stage c10d0s0
mount it to /media/stage
mkfile -n 500GB /media/stage/disks/disk.img
zpool create zfsStage /media/stage/disks/disk.img
I want to be able to take this file and back it up to tape.
However, that is fine but if I reboot I see:

pool: zfsStage
 state: UNAVAIL
status: One or more devices could not be opened.  There are
insufficient
   replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool
online'.
  see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAME                               STATE    READ WRITE CKSUM
zfsStage                           UNAVAIL     0     0     0  insufficient replicas
  /media/zfsStage/disks/disk1.img  UNAVAIL     0     0     0  cannot open


I then tried:
$zpool online zfsStage /media/zfsStage/disks
cannot open 'zfsStage': pool is unavailable
$ zpool online zfsStage /media/zfsStage/disks/disk1.img
cannot open 'zfsStage': pool is unavailable

Am I doing something wrong? When I restore from tape how would I
mount this if this sort of thing is coming up? Hopefully someone
can help me out.

Thanks,
Greg


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] why L2ARC device is used to store files ?

2010-03-08 Thread Cindy Swearingen


Good catch Eric, I didn't see this problem at first...

The problem here, as Richard described well, is that the ctdp* devices
represent the larger fdisk partition, which might also contain a ctds*
device.

This means that in this configuration, c7t0d0p3 and c7t0d0s0 might
share the same blocks.

My advice would be to copy the data from both the hdd pool and rpool
and recreate these pools. My fear is that if you destroy the hdd pool,
you will clobber your rpool data. Recreating both pools by using the two
entire disks, and getting another disk for the cache device, would be
fewer headaches all around.

ZFS warns when you attempt to create this config and we also have a
current CR to prevent creating pools on p* devices.
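
For the non-root hdd pool, a rough sketch of the copy-and-recreate step
might look like the following, where c8t0d0 and the /backup path are
assumptions and the stream file must live outside the pools being
recreated; moving the root pool is more involved (see the ZFS
troubleshooting guide):

# zfs snapshot -r hdd@migrate
# zfs send -R hdd@migrate > /backup/hdd.stream
# zpool destroy hdd
# zpool create hdd c8t0d0
# zfs receive -dF hdd < /backup/hdd.stream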

Thanks,

Cindy

On 03/06/10 15:42, Eric D. Mudama wrote:

On Sat, Mar  6 at  3:15, Abdullah Al-Dahlawi wrote:


  hdd ONLINE   0 0 0
c7t0d0p3  ONLINE   0 0 0

  rpool   ONLINE   0 0 0
c7t0d0s0  ONLINE   0 0 0


I trimmed your zpool status output a bit.

Are those two the same device?  I'm barely familiar with solaris
partitioning and labels... what's the difference between a slice and a
partition?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Assign Spares

2010-03-08 Thread Cindy Swearingen

Hi Tony,

Good questions...

Yes, you can assign a spare disk to multiple pools on the same system,
but not shared across systems.

The problem with sharing a spare disk with a root pool is that if the
spare kicks in, a boot block is not automatically applied. The
difference in the disk labels is probably another problem.

My advice is:

1. Mirror the root pool. If you want additional protection for the root
pool, then create a 3-way mirror (by attaching the spare disk); a rough
sketch follows below.

2. Keep a spare disk physically connected to the system but not
logically connected to either pool as a spare. You would have to
manually replace if a disk in either pool fails. Plus, if a disk fails
in a root pool mirror, you should confirm that it has an SMI label.
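
As a rough sketch of option 1, where c1t0d0s0 is an assumed existing root
pool disk and c1t2d0s0 is the assumed spare disk, already carrying an SMI
label with a slice 0:

# zpool attach rpool c1t0d0s0 c1t2d0s0
(wait for the resilver to complete, then apply the boot blocks)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0   (SPARC)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0                  (x86)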

Thanks,

Cindy

On 03/08/10 10:10, Tony MacDoodle wrote:

Can I assign a disk to multiple pools?

The only problem is that one pool is "rpool" with an SMI label and the other
pool is a standard ZFS pool.


Thanks





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Can you manually trigger spares?

2010-03-08 Thread Cindy Swearingen

Hi Tim,

I'm not sure why your spare isn't kicking in, but you could manually
replace the failed disk with the spare like this:

# zpool replace fserv c7t5d0 c3t6d0

If you want to run with the spare for awhile, then you can also detach
the original failed disk like this:

# zpool detach fserv c7t5d0

I don't know why the device name changed either.

See a similar example below.

Thanks,

Cindy

# zpool create -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 spare c2t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
    c2t0d0  ONLINE   0 0 0
    c2t1d0  ONLINE   0 0 0
    c2t2d0  ONLINE   0 0 0
spares
  c2t3d0    AVAIL
# zpool replace tank c2t2d0 c2t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Mar  8 
12:03:37 2010

config:

NAME          STATE READ WRITE CKSUM
tank          ONLINE   0 0 0
  raidz2-0    ONLINE   0 0 0
    c2t0d0    ONLINE   0 0 0
    c2t1d0    ONLINE   0 0 0
    spare-2   ONLINE   0 0 0
      c2t2d0  ONLINE   0 0 0
      c2t3d0  ONLINE   0 0 0  91.5K resilvered
spares
  c2t3d0      INUSE     currently in use

errors: No known data errors
# zpool detach tank c2t2d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Mon Mar  8 
12:03:37 2010

config:

NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
    c2t0d0  ONLINE   0 0 0
    c2t1d0  ONLINE   0 0 0
    c2t3d0  ONLINE   0 0 0  91.5K resilvered

errors: No known data errors





On 03/08/10 11:33, Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in?  Mine doesn't 
appear to be doing so.  What happened is I exported a pool to reinstall 
solaris on this system.  When I went to re-import it, one of the drives 
refused to come back online.  So, the pool imported degraded, but it 
doesn't seem to want to use the hot spare... I've tried triggering a 
scrub to see if that would give it a kick, but no-go.


r...@fserv:~$ zpool status
  pool: fserv
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub completed after 3h19m with 0 errors on Mon Mar  8 02:28:08 
2010

config:

NAME                      STATE     READ WRITE CKSUM
fserv                     DEGRADED     0     0     0
  raidz2-0                DEGRADED     0     0     0
    c2t0d0                ONLINE       0     0     0
    c2t1d0                ONLINE       0     0     0
    c2t2d0                ONLINE       0     0     0
    c2t3d0                ONLINE       0     0     0
    c2t4d0                ONLINE       0     0     0
    c2t5d0                ONLINE       0     0     0
    c3t0d0                ONLINE       0     0     0
    c3t1d0                ONLINE       0     0     0
    c3t2d0                ONLINE       0     0     0
    c3t3d0                ONLINE       0     0     0
    c3t4d0                ONLINE       0     0     0
    12589257915302950264  UNAVAIL      0     0     0  was /dev/dsk/c7t5d0s0

spares
  c3t6d0                  AVAIL

--Tim




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Recover rpool

2010-03-09 Thread Cindy Swearingen

Hi D,

Is this a 32-bit system?

We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.

Maybe someone with similar recent memory problems can advise.

Thanks,

Cindy

On 03/09/10 09:15, D. Pinnock wrote:

When I boot from a snv133 live cd and attempt to import the rpool it panics 
with this output:

Sun Microsystems Inc.   SunOS 5.11  snv_133 February 2010

j...@opensolaris:~$ pfexec su
Mar  9 03:11:37 opensolaris su: 'su root' succeeded for jack on /dev/console
j...@opensolaris:~# zpool import -f -o ro -o failmode=continue -R /mnt rpool

panic[cpu1]/thread=ff00086e0c60: BAD TRAP: type=e (#pf Page fault) 
rp=ff00086dfe60 addr=278 occurred in module "unix" due to a NULL pointer 
dereference

sched: #pf Page fault
Bad kernel fault at addr=0x278
pid=0, pc=0xfb862b6b, sp=0xff00086dff58, eflags=0x10246
cr0: 8005003b cr4: 6f8
cr2: 278cr3: c80cr8: c

rdi:  278 rsi:4 rdx: ff00086e0c60
rcx:0  r8:   40  r9:21d9a
rax:0 rbx:0 rbp: ff00086dffb0
r10:   7f6fc8 r11:   6e r12:0
r13:  278 r14:4 r15: ff01cfe27e08
fsb:0 gsb: ff01ccfa5080  ds:   4b
 es:   4b  fs:0  gs:  1c3
trp:e err:2 rip: fb862b6b
 cs:   30 rfl:10246 rsp: ff00086dff58
 ss:   38

ff00086dfd40 unix:die+dd ()
ff00086dfe50 unix:trap+177e ()
ff00086dfe60 unix:cmntrap+e6 ()
ff00086dffb0 unix:mutex_enter+b ()
ff00086dffd0 zfs:zio_buf_alloc+2c ()
ff00086e0010 zfs:arc_get_data_buf+173 ()
ff00086e0060 zfs:arc_buf_alloc+a2 ()
ff00086e0100 zfs:arc_read_nolock+12f ()
ff00086e01a0 zfs:arc_read+75 ()
ff00086e0230 zfs:scrub_prefetch+b9 ()
ff00086e02f0 zfs:scrub_visitbp+5f1 ()
ff00086e03b0 zfs:scrub_visitbp+6e3 ()
ff00086e0470 zfs:scrub_visitbp+6e3 ()
ff00086e0530 zfs:scrub_visitbp+6e3 ()
ff00086e05f0 zfs:scrub_visitbp+6e3 ()
ff00086e06b0 zfs:scrub_visitbp+6e3 ()
ff00086e0750 zfs:scrub_visitdnode+84 ()
ff00086e0810 zfs:scrub_visitbp+1a6 ()
ff00086e0860 zfs:scrub_visit_rootbp+4f ()
ff00086e08c0 zfs:scrub_visitds+7e ()
ff00086e0a80 zfs:dsl_pool_scrub_sync+163 ()
ff00086e0af0 zfs:dsl_pool_sync+25b ()
ff00086e0ba0 zfs:spa_sync+36f ()
ff00086e0c40 zfs:txg_sync_thread+24a ()
ff00086e0c50 unix:thread_start+8 ()

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Cindy Swearingen

Hi Harry,

Reviewing other postings where permanent errors were found on redundant
ZFS configs, one was resolved by re-running the zpool scrub and one
resolved itself because the files with the permanent errors were most
likely temporary files.

One of the files with permanent errors below is a snapshot and the other
looks like another backup.

I would recommend the top section of this troubleshooting wiki to
determine if hardware issues are causing these permanent errors:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

If it turns out that some hardware problem, power failure, or other
event caused these errors and if rerunning the scrub doesn't remove
these files, then I would remove them manually (if you have copies of
the data somewhere else).
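
If it helps, a minimal sketch of that check-and-rescrub cycle on your
pool (the pool name z3 is taken from your output) would be:

# zpool clear z3
# zpool scrub z3
# zpool status -v z3     (after the scrub completes)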

Thanks,

Cindy


On 03/09/10 10:08, Harry Putnam wrote:

[I hope this isn't a repost double whammy.  I posted this message
under `Message-ID: <87fx4ai5sp@newsguy.com>' over 15 hrs ago but
it never appeared on my nntp server (gmane) far as I can see]

I'm a little at a loss here as to what to do about these two errors
that turned up during a scrub.

The discs involved are a matched pair in mirror mode.

zpool status -v z3 (wrapped for mail):
----   ---=---   -   
scrub: scrub completed after 1h48m with 2 errors on Mon Mar 8

10:26:49 2010 config:

NAME        STATE READ WRITE CKSUM
z3          ONLINE   0 0 2
  mirror-0  ONLINE   0 0 4
    c5d0    ONLINE   0 0 4
    c6d0    ONLINE   0 0 4

   errors: Permanent errors have been detected in the following files:
   [NOTE: Edited to ease reading -ed -hp]
z3/proje...@zfs-auto-snap:monthly-2009-08-30-09:26:/Training/\
[... huge path snipped ...]/2_Database.mov

/t/bk-test-DiskDamage-021710_005252/rsnap/misc/hourly.4/\
[... huge path snipped ...]/es.utf-8.sug

----   ---=---   -   


Those are just two on disk files.

Can it be as simple as just deleting them?

Or is something more technical required.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] what to do when errors occur during scrub

2010-03-09 Thread Cindy Swearingen
Hi Harry,

Part of my job is to make this stuff easier, but we have some hurdles to
overcome in the area of device management and troubleshooting.

The relevant zfs-discuss threads include 'permanent errors' if that helps. 

You might try running zpool clear on the pool between the scrubs.
I will add this as a recovery step in the t/s wiki.

I believe you'll see the hex output if the files no longer exist.

You could try to using this syntax to isolate any recent errors on your 
devices:

# fmdump -eV | grep c5d0
# fmdump -eV | grep c6d0

Cindy


- Original Message -
From: Harry Putnam 
Date: Tuesday, March 9, 2010 4:00 pm
Subject: Re: [zfs-discuss] what to do when errors occur during scrub
To: zfs-discuss@opensolaris.org

> Cindy Swearingen  writes:
> 
> > Hi Harry,
> >
> > Reviewing other postings where permanent errors where found on
> > redundant ZFS configs, one was resolved by re-running the zpool scrub
> > and one
> > resolved itself because the files with the permanent errors were most
> > likely temporary files.
> 
> what search strings did you use to find those?... I always seem to use
> search strings that miss what I'm after. It's helpful to see how
> others conduct searches.
> 
> > One of the files with permanent errors below is a snapshot and the other
> > looks another backup.
> >
> > I would recommend the top section of this troubleshooting wiki to
> > determine if hardware issues are causing these permanent errors:
> 
> > http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
> 
> A lot of that seems horribly complex for what is apparently (and this
> may turn out to be wishful thinking) a pretty minor problem.  But it
> does say that repeated scrubs will most likely remove all traces of
> corruption (assuming its not caused by hardware).  However I see no
> evidence that the `scrub' command is doing anything at all (more on
> that below).
> 
> I decided to take the line of least Resistance and simply deleted the
> file.
> 
> As you guessed, they were backups and luckily for me, redundant. 
> 
> So following a scrub... I see errors that look more technical.
> 
> But first the info given by `zpool status' appears to either be
> referencing a earlier scrub or is seriously wrong in what it reports.
> 
>   root # zpool status -vx z3
> pool: z3
>state: ONLINE
>   status: One or more devices has experienced an error resulting in data
>   corruption.  Applications may be affected.
> 
>   action: Restore the file in question if possible.  Otherwise restore
> the entire pool from backup.  see:
> 
> http://www.sun.com/msg/ZFS-8000-8A scrub: scrub completed after
> 1h48m with 2 errors on Mon Mar 8 10:26:49 2010 config:
> 
> ----   ---=---   -   
> 
> I just ran a scrub moments ago, but `status' is still reporting one 
> from earlier in the day.  It says 1HR and 48 minutes but that is
> completely wrong too.
> 
> ----   ---=---   -   
> 
>   NAME        STATE READ WRITE CKSUM
>   z3          ONLINE   0 0 2
>     mirror-0  ONLINE   0 0 4
>       c5d0    ONLINE   0 0 4
>       c6d0    ONLINE   0 0 4
>   
>   errors: Permanent errors have been detected in the following files:
>   
>   <0x42>:<0x552d>
>   z3/t:<0xe1d99f>
> ----   ---=---   -   
> 
> The `status' report, even though it seems to have bogus information
> about the scrub, does show different output for the errors.
> 
> Are those hex addresses of devices or what?  There is nothing at all
> on z3/t  
> 
> Also - it appears `zpool scrub -s z3' doesn't really do anything.
> 
> The status report above is taken immediately after a scrub command.
> 
> The `scrub -s' command just returns the prompt... no output and
> apparently no scrub either.
> 
> Does the failure to scrub indicate it cannot be scrubbed?  Does a
> status report that shows the pool on line and not degraded really mean
> anything is that just as spurious as the scrub info there?
> 
> Sorry if I seem like a lazy dog but I don't really see a section in
> the trouble shooting (from viewing the outline of sections) that
> appears to deal with directly with scrubbing.
> 
> Apparently I'm supposed to read and digest the whole thing so as to
> know what to do... but I get quickly completely lost in the
> discussion.
> 
> They say to use fmdump for a list of defective hardware... but I

Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hi Grant,

I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
# cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver is complete
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
/dev/rdsk/c2t1d0s0

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root drive, c2t0d0 mirrored 
to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure 
to pull the mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpool c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hey list,

Grant says his system is hanging after the zpool replace on a v240, 
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.


No errors from zpool replace so it sounds like the disk was physically
replaced successfully.

If anyone else can comment or help Grant diagnose this issue, please
feel free...

Thanks,

Cindy

On 03/10/10 16:19, Grant Lowe wrote:

Well, this system is Solaris 10 5/09, with patches from November. No snapshots
running and no internal controllers. It's a file server attached to an HDS
disk array. Help and please respond ASAP as this is production! Even an IM
would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen  wrote:


From: Cindy Swearingen 
Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
To: "Grant Lowe" 
Cc: zfs-discuss@opensolaris.org
Date: Wednesday, March 10, 2010, 1:09 PM
Hi Grant,

I don't have a v240 to test but I think you might need to
unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the
slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering
is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
# cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver
is complete
# installboot -F zfs /usr/platform/`uname
-i`/lib/fs/zfs/bootblk
/dev/rdsk/c2t1d0s0

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root

drive, c2t0d0 mirrored to c2t1d0. The mirror is having
problems, and I'm unsure of the exact procedure to pull the
mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpool c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice

as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-11 Thread Cindy Swearingen

Hi David,

In general, an I/O error means that slice 0 doesn't exist
or some other problem exists with the disk.

The installgrub command is like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

Thanks,

Cindy

On 03/11/10 15:45, David L Kensiski wrote:

At Wed, 10 Mar 2010 15:28:40 -0800 Cindy Swearingen wrote:


Hey list,

Grant says his system is hanging after the zpool replace on a v240,
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.

No errors from zpool replace so it sounds like the disk was physically
replaced successfully.

If anyone else can comment or help Grant diagnose this issue, please
feel free...

Thanks,

Cindy


I had a similar problem -- swapped out the drive and now when I tried the zfs
replace, I got an I/O error:


k01_dlk$ sudo zpool replace rpool c1t0d0s0
cannot open '/dev/dsk/c1t0d0s0': I/O error


I ran format and made sure the partition table matched the good root mirror, 
then was able to rerun the zpool replace:


k01_dlk$ sudo zpool replace -f rpool c1t0d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.


Can I assume zfs_stage1_5 is correct?


k01_dlk$ cd /boot/grub/

k01_dlk$ ls
bin            fat_stage1_5      jfs_stage1_5      nbgrub             stage1           ufs_stage1_5
capability     ffs_stage1_5      menu.lst          pxegrub            stage2           vstafs_stage1_5
default        install_menu      menu.lst.orig     reiserfs_stage1_5  stage2_eltorito  xfs_stage1_5
e2fs_stage1_5  iso9660_stage1_5  minix_stage1_5    splash.xpm.gz      ufs2_stage1_5    zfs_stage1_5


Installgrub succeeded, but I'm a tad nervous about rebooting until I know for 
sure.


k01_dlk$ sudo installgrub zfs_stage1_5 stage2 /dev/rdsk/c1t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)


Thanks,

--Dave






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Change a zpool to raidz

2010-03-12 Thread Cindy Swearingen

Ian,

You might consider converting this pool to a mirrored pool, which is
currently more flexible than a raidz pool and provides good performance.

It's easy too. See the example below.

Cindy

A non-redundant pool of one disk (33 GB).

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME      STATE READ WRITE CKSUM
tank      ONLINE   0 0 0
  c0t0d0  ONLINE   0 0 0

errors: No known data errors

Attach another disk to create a mirrored pool of two disks.
Total space is still 33 GB.

# zpool attach tank c0t0d0 c1t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Mar 12 
08:20:30 2010

config:

NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
    c0t0d0  ONLINE   0 0 0
    c1t8d0  ONLINE   0 0 0  73.5K resilvered

errors: No known data errors
# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  33.8G  76.5K  33.7G   0%  ONLINE  -

Expand the pool by adding another two-disk mirror. Total space is
now 67.5 GB.

# zpool add tank mirror c1t9d0 c1t10d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Mar 12 
08:20:30 2010

config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c0t0d0   ONLINE   0 0 0
c1t8d0   ONLINE   0 0 0  73.5K resilvered
  mirror-1   ONLINE   0 0 0
c1t9d0   ONLINE   0 0 0
c1t10d0  ONLINE   0 0 0

errors: No known data errors
# zpool list tank
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  67.5G   100K  67.5G   0%  ONLINE  -




On 03/12/10 04:49, Ian Garbutt wrote:

That's fair enough; pity there isn't a simpler way.

Many thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-12 Thread Cindy Swearingen

Hi David,

Make sure you test booting from the newly attached disk and update the
BIOS, if necessary.

The boot process occurs over several stages. I'm not sure what the
zfs_stage1_5 file provides, but it is the first two stages (stage1 and
stage2) that we need for booting.

For more details, go to this site:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/boot

And review Lori's slides at the bottom of this page.

Thanks,

cindy

On 03/12/10 08:41, David L Kensiski wrote:

On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote:


Hi David,

In general, an I/O error means that the slice 0 doesn't exist
or some other problem exists with the disk.



Which makes complete sense because the partition table on the 
replacement didn't have anything specified for slice 0.





The installgrub command is like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0



I reran it like that.  For curiosity's sake, what is the zfs_stage1_5
file?





Thanks,

Cindy




Thanks you!

--Dave



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Posible newbie question about space between zpool and zfs file systems

2010-03-15 Thread Cindy Swearingen

Hi Michael,

For a RAIDZ pool, the zpool list command identifies the "inflated" space
for the storage pool, which is the physical available space without
accounting for redundancy overhead.

The zfs list command identifies how much actual pool space is available
to the file systems.

See the example below of a RAIDZ-2 pool created with three 44 GB disks.
The total pool capacity reported by zpool list is 134 GB. The amount of
pool space that is available to the file systems is 43.8 GB due to the
RAIDZ-2 redundancy overhead.

See this FAQ section for more information.

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HZFSAdministrationQuestions

Why doesn't the space that is reported by the zpool list command and the 
zfs list command match?


Although this site is dog-slow for me today...

Thanks,

Cindy

# zpool create xpool raidz2 c3t40d0 c3t40d1 c3t40d2
# zpool list xpool
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
xpool   134G   234K   134G   0%  ONLINE  -
# zfs list xpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
xpool  73.2K  43.8G  20.9K  /xpool


On 03/15/10 08:38, Michael Hassey wrote:

Sorry if this is too basic -

So I have a single zpool in addition to the rpool, called xpool.

NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   136G   109G  27.5G  79%  ONLINE  -
xpool   408G   171G   237G  42%  ONLINE  -

I have 408 GB in the pool and am using 171 GB, leaving me 237 GB.


The pool is built up as;

  pool: xpool
 state: ONLINE
 scrub: none requested
config:

NAME        STATE READ WRITE CKSUM
xpool       ONLINE   0 0 0
  raidz2    ONLINE   0 0 0
    c8t1d0  ONLINE   0 0 0
    c8t2d0  ONLINE   0 0 0
    c8t3d0  ONLINE   0 0 0

errors: No known data errors


But - and here is the question -

Creating file systems on it, and the file systems in play report only 76GB of 
space free

<<<>>

xpool/zones/logserver/ROOT/zbe 975M  76.4G   975M  legacy
xpool/zones/openxsrvr 2.22G  76.4G  21.9K  /export/zones/openxsrvr
xpool/zones/openxsrvr/ROOT2.22G  76.4G  18.9K  legacy
xpool/zones/openxsrvr/ROOT/zbe2.22G  76.4G  2.22G  legacy
xpool/zones/puggles241M  76.4G  21.9K  /export/zones/puggles
xpool/zones/puggles/ROOT   241M  76.4G  18.9K  legacy
xpool/zones/puggles/ROOT/zbe   241M  76.4G   241M  legacy
xpool/zones/reposerver 299M  76.4G  21.9K  /export/zones/reposerver


So my question is, where is the space from xpool being used? or is it?


Thanks for reading.

Mike.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-16 Thread Cindy Swearingen

Hi Svein,

Here's a couple of pointers:

http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration

http://blogs.sun.com/observatory/entry/iscsi_san

Thanks,

Cindy

On 03/16/10 12:15, Svein Skogen wrote:
Things used to be simple. 


"zfs create -V xxg -o shareiscsi=on pool/iSCSI/mynewvolume"

It worked.

Now we've got a new, feature-rich baby in town, called comstar, and so far all
attempts at grokking the excuse of a manpage have simply left me with a nasty
headache.

_WHERE_ is the replacement one-line command that lets me create a new
iscsi-shared volume (which I could then mount on my VMWare ESXi box)?

//Svein

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: clarification on meaning of the autoreplace property

2010-03-17 Thread Cindy Swearingen

Hi Dave,

I'm unclear about the autoreplace behavior with one spare disk that is
connected to two pools. I don't see how it could work if the autoreplace
property is enabled on both pools, because a replacement formats and
uses a spare disk that might be in use in the other pool. Maybe I
misunderstand.

1. I think autoreplace behavior might be inconsistent when a device is
removed. CR 6935332 was filed recently but is not available yet through
our public bug database.

2. The current issue with adding a spare disk to a ZFS root pool is that
if a root pool mirror disk fails and the spare kicks in, the bootblock
is not applied automatically. We're working on improving this
experience.

My advice would be to create a 3-way mirrored root pool until we have a
better solution for root pool spares.

3. For simplicity and ease of recovery, consider using your disks as
whole disks, even though you must use slices for the root pool.
If one disk is part of two pools and it fails, two pools are impacted.
The beauty of ZFS is no longer having to deal with slice administration,
except for the root pool.


I like your mirrored pool configurations, but I would simplify them by
converting store1 to use whole disks and keeping a separate spare disk
for the store1 pool. For the root pool, either create a 3-way mirror or
keep a spare disk connected to the system but unconfigured; a sketch of
the whole-disk conversion follows.
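
For example, migrating one side of a store1 mirror from a slice to a
whole disk could look like this (c0t5d0 is a hypothetical new disk);
repeat for the other slices as each resilver completes:

# zpool attach store1 c0t0d0s7 c0t5d0
(wait for the resilver to complete)
# zpool detach store1 c0t0d0s7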

Thanks,

Cindy



On 03/17/10 10:25, Dave Johnson wrote:

From pages 29, 83, 86, 90, and 284 of the 10/09 Solaris ZFS Administration
guide, it sounds like a disk designated as a hot spare will:
1. Automatically take the place of a bad drive when needed.
2. Automatically be detached back to the spare
   pool when a new device is inserted and brought up to replace the
   original compromised one.

Should this work the same way for slices?

I have four active disks in a RAID 10 configuration
for a storage pool, and the same disks are used
for mirrored root configurations, but
only one of the possible mirrored root slice
pairs is currently active.

I wanted to designate slices on a 5th disk as
hot spares for the two existing pools, so
after partitioning the 5th disk (#4) identical
to the four existing disks, I ran:

# zpool add rpool spare c0t4d0s0
# zpool add store1 spare c0t4d0s7
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME          STATE READ WRITE CKSUM
rpool         ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c0t0d0s0  ONLINE   0 0 0
    c0t1d0s0  ONLINE   0 0 0
spares
  c0t4d0s0    AVAIL

errors: No known data errors

  pool: store1
 state: ONLINE
 scrub: none requested
config:

NAME          STATE READ WRITE CKSUM
store1        ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c0t0d0s7  ONLINE   0 0 0
    c0t1d0s7  ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c0t2d0s7  ONLINE   0 0 0
    c0t3d0s7  ONLINE   0 0 0
spares
  c0t4d0s7    AVAIL

errors: No known data errors
--
So it looked like everything was set up how I was
hoping until I emulated a disk failure by pulling
one of the online disks. The root pool responded
how I expected, but the storage pool, on slice 7,
did not appear to perform the autoreplace:

Not too long after pulling one of the online disks:


# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 0h0m, 10.02% done, 0h5m to go
config:

NAME            STATE     READ WRITE CKSUM
rpool           DEGRADED     0     0     0
  mirror        DEGRADED     0     0     0
    c0t0d0s0    ONLINE       0     0     0
    spare       DEGRADED    84     0     0
      c0t1d0s0  REMOVED      0     0     0
      c0t4d0s0  ONLINE       0     0    84  329M resilvered
spares
  c0t4d0s0      INUSE     currently in use

errors: No known data errors

  pool: store1
 state: ONLINE
 scrub: none requested
config:

NAME          STATE READ WRITE CKSUM
store1        ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c0t0d0s7  ONLINE   0 0 0
    c0t1d0s7  ONLINE   0 0 0
  mirror      ONLINE   0 0 0
    c0t2d0s7  ONLINE   0 0 0
    c0t3d0s7  ONLINE   0 0 0
spares
  c0t4d0s7    AVAIL

errors: No known data errors

I w

Re: [zfs-discuss] zpool I/O error

2010-03-19 Thread Cindy Swearingen

Hi Grant,

An I/O error generally means that there is some problem either accessing 
the disk or disks in this pool, or a disk label got clobbered.


Does zpool status provide any clues about what's wrong with this pool?
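
A minimal first pass, using the pool name from your output, might be:

# zpool status -v oradata_fs1
# fmdump -eV | grep <device-from-the-status-output>

The fmdump step (substitute a device name from the status output) should
show whether recent disk or driver errors are behind the I/O error.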

Thanks,

Cindy

On 03/19/10 10:26, Grant Lowe wrote:

Hi all,

I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
# 


The pools I have on this box look like this:

#zpool list
NAME          SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
oradata_fs1   532G   119K   532G   0%  DEGRADED  -
rpool         136G  28.6G   107G  21%  ONLINE    -
#

Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] RAIDZ2 configuration

2010-03-31 Thread Cindy Swearingen

Hi Ned,

If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to a double-parity RAIDZ
example with supporting text, here:

http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view

Can you restate the problem with this page?

Thanks,

Cindy


On 03/26/10 05:42, Edward Ned Harvey wrote:
Just because most people are probably too lazy to click the link, I’ll 
paste a phrase from that sun.com webpage below:


“Creating a single-parity RAID-Z pool is identical to creating a 
mirrored pool, except that the ‘raidz’ or ‘raidz1’ keyword is used 
instead of ‘mirror’.”


And

“zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0”

 

So … Shame on you, Sun, for doing this to your poor unfortunate 
readers.  It would be nice if the page were a wiki, or somehow able to 
have feedback submitted…


 

 

 

*From:* zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of *Bruno Sousa

*Sent:* Thursday, March 25, 2010 3:28 PM
*To:* Freddie Cash
*Cc:* ZFS filesystem discussion list
*Subject:* Re: [zfs-discuss] RAIDZ2 configuration

 

Hmm... it might be completely wrong, but the idea of a raidz2 vdev with 3
disks came from reading
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view .


This particular page has the following example :

*zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0*

# *zpool status -v tank*

  pool: tank

 state: ONLINE

 scrub: none requested

config:

 


NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  raidz2    ONLINE   0 0 0
    c1t0d0  ONLINE   0 0 0
    c2t0d0  ONLINE   0 0 0
    c3t0d0  ONLINE   0 0 0

 

So... what am I missing here? Just a bad example in the Sun documentation
regarding ZFS?


Bruno

On 25-3-2010 20:10, Freddie Cash wrote:

On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa wrote:


What do you mean by "Using fewer than 4 disks in a raidz2 defeats the
purpose of raidz2, as you will always be in a degraded mode"? Does it
mean that having 2 vdevs with 3 disks won't be redundant in the
event of a drive failure?


 

raidz1 is similar to raid5 in that it is single-parity, and requires a 
minimum of 3 drives (2 data + 1 parity)


raidz2 is similar to raid6 in that it is double-parity, and requires a 
minimum of 4 drives (2 data + 2 parity)


 

IOW, a raidz2 vdev made up of 3 drives will always be running in 
degraded mode (it's missing a drive).


 


--

Freddie Cash
fjwc...@gmail.com

 

 


___

zfs-discuss mailing list

zfs-discuss@opensolaris.org 

http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  

 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] RAIDZ2 configuration

2010-04-01 Thread Cindy Swearingen

Hi Bruno,

I agree that the raidz2 example on this page is weak and I will provide
a better one.

ZFS is very flexible and can be configured many different ways.

If someone new to ZFS wants to take 3 old (but reliable) disks and make
a raidz2 configuration for testing, we would not consider this a
nonsensical idea.

If I had only 3 disks, I would create a mirrored configuration of two
disks and keep one as a spare.
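
For example, a minimal sketch of that layout (the device names are just
placeholders):

# zpool create tank mirror c1t0d0 c2t0d0 spare c3t0d0

The spare only engages when one of the mirror disks fails, so you keep
full mirror redundancy plus an automatic replacement path.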

Thanks,

Cindy

On 03/31/10 16:02, Bruno Sousa wrote:

Hi Cindy,

This whole issue started when I asked for opinions on this list about how I
should create zpools. It seems that one of my initial ideas, creating a vdev
with 3 disks in a raidz configuration, is a nonsensical configuration.
Somewhere along the way I "defended" my initial idea with the fact that
the documentation from Sun has such a configuration as an example, as seen
here:


*zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0*  at
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view

So if, conceptually, the idea of having a vdev with 3 disks within a raidz
configuration is a bad one, the official Sun documentation should not
have such an example. However, if people put such an example in the Sun
documentation, perhaps this whole idea is not that bad at all..

Can you provide anything on this subject?

Thanks,
Bruno




On 31-3-2010 23:49, Cindy Swearingen wrote:

Hi Ned,

If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to double-parity RAIDZ
example with supporting text, here:

http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view

Can you restate the problem with this page?

Thanks,

Cindy


On 03/26/10 05:42, Edward Ned Harvey wrote:

Just because most people are probably too lazy to click the link,
I’ll paste a phrase from that sun.com webpage below:

“Creating a single-parity RAID-Z pool is identical to creating a
mirrored pool, except that the ‘raidz’ or ‘raidz1’ keyword is used
instead of ‘mirror’.”

And

“zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0”

 


So … Shame on you, Sun, for doing this to your poor unfortunate
readers.  It would be nice if the page were a wiki, or somehow able
to have feedback submitted…

 

 

 


*From:* zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of *Bruno Sousa
*Sent:* Thursday, March 25, 2010 3:28 PM
*To:* Freddie Cash
*Cc:* ZFS filesystem discussion list
*Subject:* Re: [zfs-discuss] RAIDZ2 configuration

 


Hmm...it might be completely wrong , but the idea of raidz2 vdev with
3 disks came from the reading of
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view .

This particular page has the following example :

*zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0*

# *zpool status -v tank*

  pool: tank

 state: ONLINE

 scrub: none requested

config:

 


NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  raidz2    ONLINE   0 0 0
    c1t0d0  ONLINE   0 0 0
    c2t0d0  ONLINE   0 0 0
    c3t0d0  ONLINE   0 0 0

 


So...what am i missing here? Just a bad example in the sun
documentation regarding zfs?

Bruno

On 25-3-2010 20:10, Freddie Cash wrote:

On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa <bso...@epinfante.com> wrote:

What do you mean by "Using fewer than 4 disks in a raidz2 defeats the
purpose of raidz2, as you will always be in a degraded mode" ? Does
it means that having 2 vdevs with 3 disks it won't be redundant in
the advent of a drive failure?

 


raidz1 is similar to raid5 in that it is single-parity, and requires
a minimum of 3 drives (2 data + 1 parity)

raidz2 is similar to raid6 in that it is double-parity, and requires
a minimum of 4 drives (2 data + 2 parity)

 


IOW, a raidz2 vdev made up of 3 drives will always be running in
degraded mode (it's missing a drive).

 


--

Freddie Cash
fjwc...@gmail.com

 

 


___

zfs-discuss mailing list

zfs-discuss@opensolaris.org

http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 
 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAIDZ2 configuration

2010-04-01 Thread Cindy Swearingen

Hi Bruno,

I agree that the raidz2 example on this page is weak and I will provide
a better one.

ZFS is very flexible and can be configured many different ways.

If someone new to ZFS wants to take 3 old (but reliable) disks and make
a raidz2 configuration for testing, we would not consider this a
nonsensical idea. You can then apply what you learn about ZFS space
allocation and redundancy to a new configuration.

If I had only 3 disks, I would create a mirrored configuration of two
disks and keep one as a spare.

Thanks,

Cindy


On 03/31/10 16:02, Bruno Sousa wrote:

Hi Cindy,

This whole issue started when I asked for opinions on this list about how I
should create zpools. It seems that one of my initial ideas, creating a vdev
with 3 disks in a raidz configuration, is a nonsensical configuration.
Somewhere along the way I "defended" my initial idea with the fact that
the documentation from Sun has such a configuration as an example, as seen
here:


*zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0*  at
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view

So if, conceptually, the idea of having a vdev with 3 disks within a raidz
configuration is a bad one, the official Sun documentation should not
have such an example. However, if people put such an example in the Sun
documentation, perhaps this whole idea is not that bad at all..

Can you provide anything on this subject?

Thanks,
Bruno




On 31-3-2010 23:49, Cindy Swearingen wrote:

Hi Ned,

If you look at the examples on the page that you cite, they start
with single-parity RAIDZ examples and then move to double-parity RAIDZ
example with supporting text, here:

http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view

Can you restate the problem with this page?

Thanks,

Cindy


On 03/26/10 05:42, Edward Ned Harvey wrote:

Just because most people are probably too lazy to click the link,
I’ll paste a phrase from that sun.com webpage below:

“Creating a single-parity RAID-Z pool is identical to creating a
mirrored pool, except that the ‘raidz’ or ‘raidz1’ keyword is used
instead of ‘mirror’.”

And

“zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0”

 


So … Shame on you, Sun, for doing this to your poor unfortunate
readers.  It would be nice if the page were a wiki, or somehow able
to have feedback submitted…

 

 

 


*From:* zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of *Bruno Sousa
*Sent:* Thursday, March 25, 2010 3:28 PM
*To:* Freddie Cash
*Cc:* ZFS filesystem discussion list
*Subject:* Re: [zfs-discuss] RAIDZ2 configuration

 


Hmm...it might be completely wrong , but the idea of raidz2 vdev with
3 disks came from the reading of
http://docs.sun.com/app/docs/doc/819-5461/gcvjg?a=view .

This particular page has the following example :

*zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0*

# *zpool status -v tank*

  pool: tank

 state: ONLINE

 scrub: none requested

config:

 


NAME        STATE READ WRITE CKSUM
tank        ONLINE   0 0 0
  raidz2    ONLINE   0 0 0
    c1t0d0  ONLINE   0 0 0
    c2t0d0  ONLINE   0 0 0
    c3t0d0  ONLINE   0 0 0

 


So...what am i missing here? Just a bad example in the sun
documentation regarding zfs?

Bruno

On 25-3-2010 20:10, Freddie Cash wrote:

On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa <bso...@epinfante.com> wrote:

What do you mean by "Using fewer than 4 disks in a raidz2 defeats the
purpose of raidz2, as you will always be in a degraded mode" ? Does
it means that having 2 vdevs with 3 disks it won't be redundant in
the advent of a drive failure?

 


raidz1 is similar to raid5 in that it is single-parity, and requires
a minimum of 3 drives (2 data + 1 parity)

raidz2 is similar to raid6 in that it is double-parity, and requires
a minimum of 4 drives (2 data + 2 parity)

 


IOW, a raidz2 vdev made up of 3 drives will always be running in
degraded mode (it's missing a drive).

 


--

Freddie Cash
fjwc...@gmail.com

 

 


___

zfs-discuss mailing list

zfs-discuss@opensolaris.org

http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 
 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem importing a pool consisting of mkfile elements

2010-04-01 Thread Cindy Swearingen

Hi Marlanne,

I can import a pool that is created with files on a system running the
Solaris 10 10/09 release. See the output below.

This could be a regression from a previous Solaris release, although I
can't reproduce it. In any case, creating a pool with files is not a
recommended practice, as described in the zpool.1m man page.

You are also correct that you cannot import a pool that was created
with files on another system. I don't think this operation was ever
intended to be used on a pool that is created with files.

Thanks,

Cindy

# zpool create pool mirror /files/file.1 /files/file.2
# zpool export pool
# zpool import -d /files
  pool: pool
id: 15183682163901756622
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

pool   ONLINE
  mirror-0 ONLINE
/files/file.1  ONLINE
/files/file.2  ONLINE
# zpool import -d /files pool
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
pool   ONLINE   0 0 0
  mirror-0 ONLINE   0 0 0
/files/file.1  ONLINE   0 0 0
/files/file.2  ONLINE   0 0 0

errors: No known data errors

On 03/28/10 14:39, Marlanne DeLaSource wrote:

I would not like to be classified as a spammer, but can anyone help me with
this?

If mkfile-created pools cannot be re-imported, it may be a concern to some 
customers.
Thanks for a reply.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is this pool recoverable?

2010-04-03 Thread Cindy Swearingen

Patrick,

I'm happy that you were able to recover your pool.

Your original zpool status says that this pool was last accessed on
another system, which I believe is what caused the pool to fail,
particularly if it was accessed simultaneously from two systems.

It is important that the cause of the original pool failure is
identified to prevent it from happening again.

This rewind pool recovery is a last-ditch effort and might not recover
all broken pools.
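
For the archives, a minimal sketch of that recovery attempt (the pool
name tank is a placeholder, and this assumes the -Ff you used was on
zpool import: -f overrides the "last accessed by another system" check
and -F rewinds to the last consistent transaction group, discarding the
last few seconds of writes):

# zpool import -fF tank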

Thanks,

Cindy

On 04/02/10 12:32, Patrick Tiquet wrote:
Thanks, that worked!! 

It needed "-Ff" 


The pool has been recovered with minimal loss in data.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression property not received

2010-04-07 Thread Cindy Swearingen

Daniel,

Which Solaris release is this?

I can't reproduce this on my lab system that runs the Solaris 10 10/09 
release.


See the output below.

Thanks,

Cindy

# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/t...@now
# zfs send -R tank/t...@now | zfs receive -vd rpool
receiving full stream of tank/t...@now into rpool/t...@now
received 249KB stream in 2 seconds (125KB/sec)
# zfs list -r rpool
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  39.4G  27.5G  47.1M  /rpool
rpool/ROOT 4.89G  27.5G21K  legacy
rpool/ROOT/s10s_u8wos_08a  4.89G  27.5G  4.89G  /
rpool/dump 1.50G  27.5G  1.50G  -
rpool/export 44K  27.5G23K  /export
rpool/export/home21K  27.5G21K  /export/home
rpool/snaps31.0G  27.5G  31.0G  /rpool/snaps
rpool/swap2G  29.5G16K  -
rpool/test   21K  27.5G21K  /rpool/test
rpool/t...@now 0  -21K  -
# zfs get compression rpool/test
NAMEPROPERTY VALUE SOURCE
rpool/test  compression  gzip  local

On 04/07/10 11:47, Daniel Bakken wrote:
When I send a filesystem with compression=gzip to another server with 
compression=on, compression=gzip is not set on the received filesystem. 
I am using:


zfs send -R promise1/arch...@daily.1 | zfs receive -vd sas

The zfs manpage says regarding the -R flag: "When received, all 
properties, snapshots, descendent file systems, and clones are 
preserved." Snapshots are preserved, but the compression property is 
not. Any ideas why this doesn't work as advertised?


Thanks,

Daniel Bakken




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression property not received

2010-04-07 Thread Cindy Swearingen

Hi Daniel,

I tried to reproduce this by sending from a b130 system to a s10u9 
system, which vary in pool versions, but this shouldn't matter. I've
been sending/receiving streams between latest build systems and
older s10 systems for a long time. The zfs send -R option to send a
recursive snapshot and all properties integrated into b77 so that
isn't your problem either.

The above works as expected. See below.

I also couldn't find any recent bugs related to this, but bug searching 
is not an exact science.


Mystified as well...

Cindy

v440-brm-02# zfs create -o compression=gzip rpool/test
v440-brm-02# zfs snapshot rpool/t...@now
v440-brm-02# zfs send -Rv rpool/t...@now | ssh t2k-brm-03 zfs receive 
-dv tank

sending from @ to rpool/t...@now
.
.
.
Password:
receiving full stream of rpool/t...@now into tank/t...@now
received 8KB stream in 1 seconds (8KB/sec)
t2k-brm-03# zfs get compression tank/test
NAME   PROPERTY VALUE SOURCE
tank/test  compression  gzip  local


On 04/07/10 12:05, Daniel Bakken wrote:

Cindy,

The source server is OpenSolaris build 129 (zpool version 22) and the 
destination is stock OpenSolaris 2009.06 (zpool version 14). Both 
filesystems are zfs version 3.


Mystified,

Daniel Bakken

On Wed, Apr 7, 2010 at 10:57 AM, Cindy Swearingen 
<cindy.swearin...@oracle.com> wrote:


Daniel,

Which Solaris release is this?

I can't reproduce this on my lab system that runs the Solaris 10
10/09 release.

See the output below.

Thanks,

Cindy

# zfs destroy -r tank/test
# zfs create -o compression=gzip tank/test
# zfs snapshot tank/t...@now
# zfs send -R tank/t...@now | zfs receive -vd rpool
receiving full stream of tank/t...@now into rpool/t...@now
received 249KB stream in 2 seconds (125KB/sec)
# zfs list -r rpool
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  39.4G  27.5G  47.1M  /rpool
rpool/ROOT 4.89G  27.5G21K  legacy
rpool/ROOT/s10s_u8wos_08a  4.89G  27.5G  4.89G  /
rpool/dump 1.50G  27.5G  1.50G  -
rpool/export 44K  27.5G23K  /export
rpool/export/home21K  27.5G21K  /export/home
rpool/snaps31.0G  27.5G  31.0G  /rpool/snaps
rpool/swap2G  29.5G16K  -
rpool/test   21K  27.5G21K  /rpool/test
rpool/t...@now 0  -21K  -
# zfs get compression rpool/test
NAMEPROPERTY VALUE SOURCE
rpool/test  compression  gzip  local


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression property not received

2010-04-08 Thread Cindy Swearingen

Hi Daniel,

D'oh...

I found a related bug when I looked at this yesterday but I didn't think
it was your problem because you didn't get a busy message.

See this RFE:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6700597


Cindy
On 04/07/10 17:59, Daniel Bakken wrote:
We have found the problem. The mountpoint property on the sender was at 
one time changed from the default, then later changed back to defaults 
using zfs set instead of zfs inherit. Therefore, zfs send included these 
local "non-default" properties in the stream, even though the local 
properties are effectively set at defaults. This caused the receiver to 
stop processing subsequent properties in the stream because the 
mountpoint isn't valid on the receiver.


I tested this theory with a spare zpool. First I used "zfs inherit 
mountpoint promise1/archive" to remove the "local" setting (which was 
exactly the same value as the default). This time the compression=gzip 
property was correctly received.


It seems like a bug to me that one failed property in a stream prevents 
the rest from being applied. I should have used zfs inherit, but it 
would be best if zfs receive handled failures more gracefully, and 
attempted to set as many properties as possible.
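
In case it helps anyone else who hits this: before sending, you can list
which properties are explicitly (locally) set on the source dataset and
would therefore ride along in a -R stream. A rough check, using the
dataset name from this thread:

# zfs get -s local all promise1/archive
# zfs inherit mountpoint promise1/archive    (clear an unwanted local value)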


Thanks to Cindy and Tom for their help.

Daniel

On Wed, Apr 7, 2010 at 2:31 AM, Tom Erickson wrote:



Now I remember that 'zfs receive' used to give up after the first
property it failed to set. If I'm remembering correctly, then, in
this case, if the mountpoint was invalid on the receive side, 'zfs
receive' would not even try to set the remaining properties.

I'd try the following in the source dataset:

zfs inherit mountpoint promise1/archive

to clear the explicit mountpoint and prevent it from being included
in the send stream. Later set it back the way it was. (Soon there
will be an option to take care of that; see CR 6883722 want 'zfs
recv -o prop=value' to set initial property values of received
dataset.) Then see if you receive the compression property successfully.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Cindy Swearingen

Jonathan,

For a different diagnostic perspective, you might use the fmdump -eV
command to identify what FMA indicates for this device. This level of
diagnostics is below the ZFS level and definitely more detailed so
you can see when these errors began and for how long.
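
For example, something along these lines keeps the output manageable
(the device name is just the one from your smartctl/zpool output):

# fmdump -e                     (one-line summary per error event)
# fmdump -eV | grep -i c5t0d0   (full detail, filtered to the suspect disk)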

Cindy

On 04/14/10 11:08, Jonathan wrote:
Yeah, 
--

$smartctl -d sat,12 -i /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)
--

I'm thinking between 111 and 132 (mentioned in post) something changed.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices

2010-04-16 Thread Cindy Swearingen

Hi Tony,

Is this on an x86 system?

If so, you might also check whether this disk has a Solaris fdisk
partition or has an EFI fdisk partition.

If it has an EFI fdisk partition then you'll need to change it to a
Solaris fdisk partition.

See the pointers below.

Thanks,

Cindy

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk

# fdisk /dev/rdsk/c1t1d0p0
selecting c1t1d0p0
  Total disk size is 8924 cylinders
 Cylinder size is 16065 (512 byte) blocks

   Cylinders
  Partition   StatusType  Start   End   Length%
  =   ==  =   ===   ==   ===
  1 EFI   0  89248925100
.
.
.
Enter Selection:
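
If it does turn out to be EFI labeled, the usual fix (roughly, per the
troubleshooting guide above) is to relabel in the expert mode of format:

# format -e
format> label
  [0] SMI Label
  [1] EFI Label
Specify Label type[1]: 0

and then recreate the Solaris fdisk partition and slice 0 if needed.
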
On 04/16/10 11:13, Tony MacDoodle wrote:
I am getting the following error, however as you can see below this is a 
SMI label...


cannot set property for 'rpool': property 'bootfs' not supported on EFI 
labeled devices



# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs - default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property 'bootfs' not supported on EFI 
labeled devices



partition> pri
Current partition table (original):
Total disk cylinders available: 1989 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 1988 69.93GB (1989/0/0) 146644992
1 unassigned wm 0 0 (0/0/0) 0
2 backup wm 0 - 1988 69.93GB (1989/0/0) 146644992
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0


Any ideas as to why?

Thanks




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS mirror

2010-04-16 Thread Cindy Swearingen

MstAsg,

Is this the root pool disk?

I'm not sure I'm following what you want to do but I think you want
to attach a disk to create a mirrored configuration, then detach
the original disk.

If this is a ZFS root pool that contains the Solaris OS, then
following these steps:

1. Attach disk-2.

# zpool attach rpool disk-1 disk-2

2. Use the zpool status command to make sure the disk has resilvered
completely.

3. Apply the bootblocks to disk-2:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/disk-2

4. Test that you can boot from disk-2.

5. Detach disk-1.

# zpool detach rpool disk-1

If this isn't the root pool, then you can skip steps 2-4.

Thanks,

Cindy

On 04/16/10 15:34, MstAsg wrote:

I have a question. I have a disk on which Solaris 10 & ZFS are installed. I wanted
to add the other disks and replace this one with another (three others in total). If
I do this and add some other disks, would the data be written immediately? Or is only
the new data mirrored? Or should I use snapshots to replace it?


Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS mirror

2010-04-16 Thread Cindy Swearingen

If this isn't a root pool disk, then skip steps 3-4. Letting
the replacement disk resilver before removing the original
disk is good advice for any configuration.

cs

On 04/16/10 16:15, Cindy Swearingen wrote:

MstAsg,

Is this the root pool disk?

I'm not sure I'm following what you want to do but I think you want
to attach a disk to create a mirrored configuration, then detach
the original disk.

If this is a ZFS root pool that contains the Solaris OS, then
following these steps:

1. Attach disk-2.

# zpool attach rpool disk-1 disk-2

2. Use the zpool status command to make sure the disk has resilvered
completely.

3. Apply the bootblocks to disk-2:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/disk-2

4. Test that you can boot from disk-2.

5. Detach disk-1.

# zpool detach rpool disk-1

If this isn't the root pool, then you can skip steps 2-4.

Thanks,

Cindy

On 04/16/10 15:34, MstAsg wrote:
I have a question. I have a disk on which Solaris 10 & ZFS are installed. I
wanted to add the other disks and replace this one with another
(three others in total). If I do this and add some other disks, would
the data be written immediately? Or is only the new data mirrored? Or
should I use snapshots to replace it?



Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-19 Thread Cindy Swearingen

Hi Brandon,

I think I've done a similar migration before by creating a second root
pool, and then create a new BE in the new root pool, like this:

# zpool create rpool2 mirror disk-1 disk2
# lucreate -n newzfsBE -p rpool2
# luactivate newzfsBE
# installgrub ...


I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.

Thanks,

Cindy


On 04/16/10 17:41, Brandon High wrote:

When I set up my opensolaris system at home, I just grabbed a 160 GB
drive that I had sitting around to use for the rpool.

Now I'm thinking of moving the rpool to another disk, probably ssd,
and I don't really want to shell out the money for two 160 GB drives.
I'm currently using ~ 18GB in the rpool, so any of the ssd 'boot
drives' being sold are large enough. I know I can't attach a device
that much smaller to the rpool however.

Would it be possible to do the following?
1. Attach the new drives.
2. Reboot from LiveCD.
3. zpool create new_rpool on the ssd
4. zfs send all datasets from rpool to new_rpool
5. installgrub /boot/grub/stage1 /boot/grub/stage2 on the ssd
6. zfs export the rpool and new_rpool
7. 'zfs import new_rpool rpool' (This should rename it to rpool, right?)
8. shutdown and disconnect the old rpool drive

This should work, right? I plan to test it on a VirtualBox instance
first, but does anyone see a problem with the general steps I've laid
out?

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Large size variations - what is canonical method

2010-04-19 Thread Cindy Swearingen

Hi Harry,

Both du and df are pre-ZFS commands and don't really understand ZFS
space issues, which are described in the ZFS FAQ here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq

Why does du(1) report different file sizes for ZFS and UFS? Why doesn't
the space consumption that is reported by the df command and the zfs
list command match?

Will's advice is good:

Use zpool list and zfs list to determine how much space is available for 
your ZFS file systems and use du or ls -l to review file sizes. Don't
use du or df to look at ZFS file system sizes.
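
One more sketch that often explains the gap: space held by snapshots and
reservations never shows up in du or df at all. Assuming a release recent
enough to support the space fields, something like this breaks the usage
down per dataset:

# zfs list -o space -r z2
# zfs list -t snapshot -r z2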

Thanks,

Cindy

On 04/19/10 16:18, Harry Putnam wrote:

Will Murnane  writes:


It's important to consider what you want this data for.  Considering
upgrading your storage to get more room?  Check out "zpool list".
Need to know whether accounting or engineering is using more space?
Look at "zfs list".  Looking at a sparse or compressed file, and want
to know how many bytes are allocated to it?  "du" does the trick.
Planning to email someone a file, and want to know if it'll fit in
their 10MB quota?  "ls -l" is the relevant command.

In short, there are many commands because there are many answers, and
many questions.  No single tool has all the information available to
it.


I'm still confused, sorry I thought I got it for a minute there.

I'm seeing a really big (to big to be excused lightly) difference with
the 2 zfs native methods (zpool list and zfs list)
compared to 2 native unix methods, du and /bin/df

I'm seeing 100s of GB in use, that /bin/df... doesn't know about, and
du cannot find.

Something isn't adding up here:

  zpool list

  NAMESIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
  rpool   466G  50.7G   415G10%  1.00x  ONLINE  -
  z2  464G   366G  97.9G78%  1.00x  ONLINE  -
  z3  696G   124G   572G17%  1.00x  ONLINE  -
 
You see z2 shows 366G allocated and 97.9 free


  zfs list -r z2

  NAMEUSED  AVAIL  REFER  MOUNTPOINT
  z2  366G  90.7G19K  /z2
  z2/pub 4.98G  90.7G  4.97G  /pub
  z2/rhosts   326G  90.7G22K  /rhosts
  z2/rhosts/imgs  299G  90.7G23K  /rhosts/imgs
  z2/rhosts/imgs/bjp 83.6G  90.7G  28.4G  /rhosts/imgs/bjp
  z2/rhosts/imgs/harvey   150G  90.7G  17.6G  /rhosts/imgs/harvey
  z2/rhosts/imgs/harvey/ImagesMusic   132G  90.7G  28.3G  
/rhosts/imgs/harvey/ImagesMusic
  z2/rhosts/imgs/mob165.9G  90.7G  27.3G  /rhosts/imgs/mob1
  z2/rhosts/misc 27.1G  90.7G23K  /rhosts/misc
  z2/rhosts/misc/bjp   20K  90.7G20K  /rhosts/misc/bjp
  z2/rhosts/misc/harvey  27.1G  90.7G  27.1G  /rhosts/misc/harvey
  z2/rhosts/misc/mob1  20K  90.7G20K  /rhosts/misc/mob1
  z2/win 34.8G  90.7G  34.6G  /win

You see  366G Used and 90.7G Available

  /bin/df -h 


  z2/pub 457G   5.0G91G 6%/pub
  z2/rhosts  457G22K91G 1%/rhosts
  z2/rhosts/imgs 457G23K91G 1%/rhosts/imgs
  z2/rhosts/imgs/bjp 457G28G91G24%/rhosts/imgs/bjp
  z2/rhosts/imgs/harvey   457G18G91G17%/rhosts/imgs/harvey
  z2/rhosts/imgs/harvey/ImagesMusic   457G28G91G24%
/rhosts/imgs/harvey/ImagesMusic
  z2/rhosts/imgs/mob1457G27G91G24%/rhosts/imgs/mob1
  z2/rhosts/misc 457G23K91G 1%/rhosts/misc
  z2/rhosts/misc/bjp 457G20K91G 1%/rhosts/misc/bjp
  z2/rhosts/misc/harvey   457G27G91G24%/rhosts/misc/harvey
  z2/rhosts/misc/mob1457G20K91G 1%/rhosts/misc/mob1
  z2/win 457G35G91G28%/win
  z2 457G19K91G 1%/z2

Looks similar, but look again.  If you add up the `used' column
(column 3) it only adds up to - 163 GB 127 KB

  du -sh /pub /win /rhosts
  5.0G/pub
  35G /win
  129G/rhosts
=
  169G

du almost agrees with /bin/df

But both du and /bin/df are around 200G smaller than either zpool or
zfs list.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Cindy Swearingen

Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10 system.

I wonder if beadm supports a similar migration. I will find out
and let you know.

Thanks,

Cindy

On 04/19/10 17:22, Brandon High wrote:

On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
 wrote:

I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.


It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail because zfs would use the
wrong partition table for booting?

basestar:~$ lucreate
-bash: lucreate: command not found
bh...@basestar:~$ man lucreate
No manual entry for lucreate.
bh...@basestar:~$ pkgsearch lucreate
-bash: pkgsearch: command not found
bh...@basestar:~$ pkg search lucreate
bh...@basestar:~$ pkg search SUNWluu
bh...@basestar:~$

I think I remember someone posting a method to copy the boot drive's
layout with prtvtoc and fmthard, but I don't remember the exact
syntax.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Making an rpool smaller?

2010-04-20 Thread Cindy Swearingen

Brandon,

You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to boot from new root pool

The BE recreation gets you part of the way and its fast, anyway.

Thanks,

Cindy

1. Create the second root pool.

# zpool create rpool2 c5t1d0s0

2. Create the new BE in the second root pool.

# beadm create -p rpool2 osol2BE

3. Activate the new BE.

# beadm activate osol2BE

4. Install the boot blocks (an example command follows this list).

5. Test that the system boots from the second root pool.

6. Update BIOS and GRUB to boot from new pool.
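
For step 4, on an x86 system the bootblock installation would look
something like this (standard stage paths; substitute your actual slice):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

On SPARC the equivalent step uses installboot with the zfs bootblk.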

On 04/20/10 08:36, Cindy Swearingen wrote:

Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10 system.

I wonder if beadm supports a similar migration. I will find out
and let you know.

Thanks,

Cindy

On 04/19/10 17:22, Brandon High wrote:

On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
 wrote:

I don't think LU cares that the disks in the new pool are smaller,
obviously they need to be large enough to contain the BE.


It doesn't look like OpenSolaris includes LU, at least on x86-64.
Anyhow, wouldn't the method you mention fail because zfs would use the
wrong partition table for booting?

basestar:~$ lucreate
-bash: lucreate: command not found
bh...@basestar:~$ man lucreate
No manual entry for lucreate.
bh...@basestar:~$ pkgsearch lucreate
-bash: pkgsearch: command not found
bh...@basestar:~$ pkg search lucreate
bh...@basestar:~$ pkg search SUNWluu
bh...@basestar:~$

I think I remember someone posting a method to copy the boot drive's
layout with prtvtoc and fmthard, but I don't remember the exact
syntax.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Identifying what zpools are exported

2010-04-21 Thread Cindy Swearingen

Hi Justin,

Maybe I misunderstand your question...

When you export a pool, it becomes available for import by using
the zpool import command. For example:

1. Export tank:

# zpool export tank

2. What pools are available for import:

# zpool import
  pool: tank
id: 7238661365053190141
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.

config:

tankONLINE
  c1t2d0ONLINE




On 04/21/10 12:59, Justin Lee Ewing wrote:
So I can obviously see what zpools I have imported... but how do I see 
pools that have been exported?  Kind of like being able to see deported 
volumes using "vxdisk -o alldgs list".


Justin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP! zpool corrupted data

2010-04-22 Thread Cindy Swearingen

Hi Clint,

Your symptoms point to disk label problems, dangling device links,
or overlapping partitions. All could be related to the power failure.

The OpenSolaris error message (b134, I think you mean) brings up these
bugs:

6912251, describes the dangling links problem, which you might be able
to clear up with devfsadm -C
6904358, describes this error due to overlapping partitions and points to
this CR:


http://defect.opensolaris.org/bz/show_bug.cgi?id=13331

If none of the above issues match, review what ZFS thinks the disk
labels are and what the disk labels actually are post-power failure.

On the OpenSolaris side, you can use the zdb command to review the disk
label information. For example, I have a pool on c5t1d0, so to review the
pool's idea of the disk labels, I use this command to check whether
the disk labels are coherent.

# zdb -l /dev/rdsk/c5t1d0s0

Review the above bug info and if it turns out that the disk labels
need to be recreated, maybe someone who had done this task can help.
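
If it is the dangling-links case, a rough first pass (before touching any
labels) would be:

# devfsadm -Cv                  (clean up stale /dev links)
# zpool import -d /dev/dsk      (re-scan and see how the pool looks now)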

Thanks,

Cindy

On 04/21/10 16:23, Clint wrote:

Hello,

Due to a power outage our file server running FreeBSD 8.0p2 will no longer come 
up due to zpool corruption.  I get the following output when trying to import 
the ZFS pool using either a FreeBSD 8.0p2 cd or the latest OpenSolaris snv_143 
cd:

FreeBSD mfsbsd 8.0-RELEASE-p2.vx.sk:/usr/obj/usr/src/sys/GENERIC  amd64
mfsbsd# zpool import
  pool: tank
id: 1998957762692994918
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

tankFAULTED  corrupted data
  raidz1ONLINE
gptid/e895b5d6-4bab-11df-8a83-0019d159e82b  ONLINE
gptid/e96cf4a2-4bab-11df-8a83-0019d159e82b  ONLINE
gptid/ea4a127c-4bab-11df-8a83-0019d159e82b  ONLINE
gptid/eb3160a6-4bab-11df-8a83-0019d159e82b  ONLINE
gptid/ec02f050-4bab-11df-8a83-0019d159e82b  ONLINE
gptid/ecdb408b-4bab-11df-8a83-0019d159e82b  ONLINE

mfsbsd# zpool import -f tank
internal error: Illegal byte sequence
Abort (core dumped)


SunOS opensolaris 5.11 snv_134 i86pc i386 i86pc Solaris

r...@opensolaris:/# zpool import -nfFX -R /mnt tank
Assertion failed: rn->rn_nozpool == B_FALSE, file ../common/libzfs_import.c, 
line 1078, function zpool_open_func
Abort (core dumped)


I don't really need to get the server bootable again, but I do need to get the 
data off of one of the file systems. Any help would be greatly appreciated.

Thanks,
Clint

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to delegate zfs snapshot destroy to users?

2010-04-26 Thread Cindy Swearingen

Hi Vlad,

The create-time permissions do not provide the correct permissions for
destroying descendent datasets, such as clones.

See example 9-5 in this section that describes how to use zfs allow -d
option to grant permissions on descendent datasets:

http://docs.sun.com/app/docs/doc/819-5461/gebxb?l=en&a=view

Example 9–5 Delegating Permissions at the Correct File System Level

Delegating or granting the appropriate permissions will take some
testing on the part of the administrator who is granting the
permissions. I hope the examples help.
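
As a rough sketch only (untested against your exact setup): granting the
permission set to the group for the file system and its descendents,
rather than relying on the create-time entry alone, should cover snapshot
destroys in the child file systems:

# zfs allow -g staff @virtual rpool/vm
# zfs allow rpool/vm            (verify the local+descendent entry)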

Thanks,

Cindy



On 04/26/10 05:28, Vladimir Marek wrote:

Hi,

I'm trying to let zfs users to create and destroy snapshots in their zfs
filesystems.

So rpool/vm has the permissions:

osol137 19:07 ~: zfs allow rpool/vm
 Permissions on rpool/vm -
Permission sets:
@virtual 
clone,create,destroy,mount,promote,readonly,receive,rename,rollback,send,share,snapshot,userprop
Create time permissions:
@virtual
Local permissions:
group staff create,mount


now as regular user I do:

$ zfs create rpool/vm/vm156888
$ zfs create rpool/vm/vm156888/a
$ zfs snapshot rpool/vm/vm156888/a...@1
$ zfs destroy rpool/vm/vm156888/a...@1
cannot destroy 'rpool/vm/vm156888/a...@1': permission denied


The only way around I found is to add 'allow' right to the @virtual
group

sudo zfs allow -s @virtual allow rpool/vm

Now as regular user I can:

zfs allow vm156888 mount,destroy rpool/vm/vm156888/a
zfs destroy rpool/vm/vm156888/a...@1

I believe that I need to do this because the "Create time" permissions
are applied only as "Local permissions" on a new filesystem, while for
deleting a snapshot I need them as Local+Descendent.


So if a user wants to use snapshots, he has to know to grant himself
mount+delete permissions first. Is this the intended way to go?

Thank you

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Expand zpool capacity

2010-04-26 Thread Cindy Swearingen

Yes, it is helpful in that it reviews all the steps needed to get the
replacement disk labeled properly for a root pool and is identical
to what we provide in the ZFS docs.

The part that is not quite accurate is the reason given for having to relabel
the replacement disk with the format utility.


If the replacement disk had an identical slice 0 (same size or greater)
with an SMI label then no need exists to relabel the disk. In this case,
he could have just attached the replacement disk, installed the boot
blocks, tested booting from the replacement disk, and detached the older 
disk.


If replacement disk had an EFI label or no slice 0, or a slice 0 that
is too small, then yes, you have to perform the format steps as
described in this video.

Thanks,

Cindy
On 04/26/10 08:24, Vladimir L. wrote:

It's a little while ago, but I've found a pretty helpful video on YT
(http://www.youtube.com/watch?v=tpzsSptzmyA) on how to completely "migrate" from one hard drive to another.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare in use althought disk is healthy ?

2010-04-26 Thread Cindy Swearingen

Hi Lutz,

You can try the following commands to see what happened:

1. Someone else replaced the disk with a spare, which would be
recorded in this command:

# zpool history -l zfs01vol

2. If the disk had some transient outage then maybe the spare kicked
in. Use the following command to see if something happened to this
disk:

# fmdump -eV

This command might produce a lot of output, but look for c3t12d0
occurrences.

3. If the c3t12d0 disk is okay, try detaching the spare back to the
spare pool like this:

# zpool detach zfs01vol c3t21d0

Thanks,

Cindy

On 04/26/10 15:41, Lutz Schumann wrote:
Hello list, 

a pool shows some strange status: 


volume: zfs01vol
 state: ONLINE
 scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38
2010
config:

NAME   STATE READ WRITE CKSUM
zfs01vol   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t4d0 ONLINE   0 0 0
c3t4d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t5d0 ONLINE   0 0 0
c3t5d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t8d0 ONLINE   0 0 0
c3t8d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t9d0 ONLINE   0 0 0
c3t9d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t12d0ONLINE   0 0 0
spare  ONLINE   0 0 0
  c3t12d0  ONLINE   0 0 0
  c3t21d0  ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t13d0ONLINE   0 0 0
c3t13d0ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t16d0ONLINE   0 0 0
c3t16d0ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t17d0ONLINE   0 0 0
c3t17d0ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t20d0ONLINE   0 0 0
c3t20d0ONLINE   0 0 0
logs   ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t0d0 ONLINE   0 0 0
c3t0d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c2t1d0 ONLINE   0 0 0
c3t1d0 ONLINE   0 0 0
cache
  c0t0d0   ONLINE   0 0 0
  c0t1d0   ONLINE   0 0 0
  c0t2d0   ONLINE   0 0 0
spares
  c2t21d0  AVAIL
  c3t21d0  INUSE currently in use

The spare is in use, although there is no failed disk in the pool.

Can anyone "interpret" this ? Is this a bug ? 

Thanks, 
Robert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS version information changes (heads up)

2010-04-27 Thread Cindy Swearingen

Hi everyone,

Please review the information below regarding access to ZFS version
information.

Let me know if you have questions.

Thanks,

Cindy

CR 6898657:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6898657

ZFS commands zpool upgrade -v and zfs upgrade -v refer to URLs that
are no longer redirected to the correct location after April 30, 2010.

Description

The opensolaris.org site has moved to hub.opensolaris.org and the
opensolaris.org site is no longer redirected to the new site after
April 30, 2010.

The zpool upgrade and zfs upgrade commands in the Solaris 10 releases
and the OpenSolaris release refer to opensolaris.org URLs that no
longer exist. For example:

# zpool upgrade -v
.
.
.

For more information on a particular version, including supported
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

# zfs upgrade -v
.
.
.
For more information on a particular version, including supported
releases, see:

http://www.opensolaris.org/os/community/zfs/version/zpl/N


Workaround

Access either of the replacement URLs as follows.

1. For zpool upgrade -v, use this URL:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/N


2. For zfs upgrade -v, use this URL:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/N-1

Resolution

CR 6898657 identifies the replacement hub.opensolaris.org URLs and
the longer term fix, which is that the zfs upgrade and zpool upgrade
commands will provide the following new text:

For more information on a particular version, including supported
releases, see the ZFS Administration Guide.

The revised ZFS Administration Guide describes the ZFS version
descriptions and the Solaris OS releases that provide the version
and feature, starting on page 293, here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-27 Thread Cindy Swearingen

Hi Wolf,

Which Solaris release is this?

If it is an OpenSolaris system running a recent build, you might
consider the zpool split feature, which splits a mirrored pool into two
separate pools, while the original pool is online.

If possible, attach the spare disks to create the mirrored pool as
a first step.

See the example below.

Thanks,

Cindy

You can attach the spare disks to the existing pool to create the
mirrored pool:

# zpool attach tank disk-1 spare-disk-1
# zpool attach tank disk-2 spare-disk-2

Which gives you a pool like this:

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Apr 27 
14:36:28 2010

config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  mirror-0   ONLINE   0 0 0
c2t9d0   ONLINE   0 0 0
c2t5d0   ONLINE   0 0 0
  mirror-1   ONLINE   0 0 0
c2t10d0  ONLINE   0 0 0
c2t6d0   ONLINE   0 0 0  56.5K resilvered

errors: No known data errors


Then, split the mirrored pool, like this:

# zpool split tank tank2
# zpool import tank2
# zpool status tank tank2
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Apr 27 
14:36:28 2010

config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  c2t9d0ONLINE   0 0 0
  c2t10d0   ONLINE   0 0 0

errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tank2   ONLINE   0 0 0
  c2t5d0ONLINE   0 0 0
  c2t6d0ONLINE   0 0 0



On 04/27/10 15:06, Wolfraider wrote:

We would like to delete and recreate our existing zfs pool without losing any 
data. The way we though we could do this was attach a few HDDs and create a new 
temporary pool, migrate our existing zfs volume to the new pool, delete and 
recreate the old pool and migrate the zfs volumes back. The big problem we have 
is we need to do all this live, without any downtime. We have 2 volumes taking 
up around 11TB and they are shared out to a couple windows servers with 
comstar. Anyone have any good ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS version information changes (heads up)

2010-04-27 Thread Cindy Swearingen

The OSOL ZFS Admin Guide PDF is pretty stable, even if the
page number isn't, but I wanted to provide an interim solution.

When this information is available on docs.sun.com very soon now,
the URL will be stable.

cs


On 04/27/10 15:32, Daniel Carosone wrote:

On Tue, Apr 27, 2010 at 11:29:04AM -0600, Cindy Swearingen wrote:

The revised ZFS Administration Guide describes the ZFS version
descriptions and the Solaris OS releases that provide the version
and feature, starting on page 293, here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs


It's not entirely clear how much of the text above you're quoting 
as the addition, but surely referring to a page number is even more 
volatile than a url?


--
Dan.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS & Mysql

2010-04-28 Thread Cindy Swearingen

Hi Abdullah,

You can review the ZFS/MySQL presentation at this site:

http://forge.mysql.com/wiki/MySQL_and_ZFS#MySQL_and_ZFS

We also provide some ZFS/MySQL tuning info on our wiki,
here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsanddatabases
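
As a very rough sketch (the dataset name and mount point are placeholders,
and the 16K recordsize is the usual InnoDB suggestion from the links
above):

# zfs create -o recordsize=16k -o mountpoint=/var/mysql tank/mysql
# chown mysql:mysql /var/mysql

then point MySQL at it in my.cnf:

[mysqld]
datadir = /var/mysql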

Thanks,

Cindy


On 04/28/10 03:42, Abdullah Al-Dahlawi wrote:

Greetings All

This might be an old question !!

Does anyone know how to use ZFS with MySQL, i.e. how to make MySQL use a
ZFS file system, e.g. how to point MySQL at tank/myzfs ???



Thanks



--
Abdullah Al-Dahlawi
George Washington University
Department. Of Electrical & Computer Engineering

Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, NFS, and ACLs ssues

2010-04-28 Thread Cindy Swearingen

Hi Mary Ellen,

We were looking at this problem and are unsure what the problem is...

To rule out NFS as the root cause, could you create and share a test ZFS 
file system without any ACLs to see if you can access the data from the
Linux client?

Let us know the result of your test.

Thanks,

Cindy
On 04/28/10 12:54, Mary Ellen Fitzpatrick wrote:
New to Solaris/ZFS and having a difficult time getting ZFS, NFS and ACLs
all working together properly. Trying to access/use zfs shared
filesystems on a linux client. When I access the dir/files on the linux
client, my permissions do not carry over, nor do the newly created
files, and I can not create new files/dirs. The permissions/owner on
the zfs share are set so the owner (mfitzpat) is allowed to do
everything, but permissions are not carrying over via NFS to the linux
client. I have googled/read and can not get it right. I think this
has something to do with NFSv4, but I can not figure it out.


Any help appreciated
Mary Ellen

Running Solaris10 5/09 (u7) on a SunFire x4540 (hecate) with ZFS and zfs 
shares automounted to Centos5 client (nona-man).
Running NIS on nona-man(Centos5) and hecate (zfs) is a client.  All 
works well.


I have created the following zfs filesystems to share and have sharenfs=on
hecate:/zp-ext/spartans/umass> zfs get sharenfs
zp-ext/spartans/umass   sharenfs  oninherited from 
zp-ext/spartans
zp-ext/spartans/umass/mfitzpat  sharenfs  oninherited from 
zp-ext/spartans


set up inheritance:
hecate:/zp-ext/spartans/umass> zfs set aclinherit=passthrough 
zp-ext/spartans/umass
hecate:/zp-ext/spartans/umass> zfs set aclinherit=passthrough 
zp-ext/spartans/umass/mfitzpat
hecate:/zp-ext/spartans/umass> zfs set aclmode=passthrough 
zp-ext/spartans/umass
hecate:/zp-ext/spartans/umass> zfs set aclmode=passthrough 
zp-ext/spartans/umass/mfitzpat


Set owner:group:
hecate:/zp-ext/spartans/umass> chown mfitzpat:umass mfitzpat
hecate:/zp-ext/spartans/umass> ls -l
total 5
drwxr-xr-x   2 mfitzpat umass  2 Apr 28 13:18 mfitzpat

Permissions:
hecate:/zp-ext/spartans/umass> ls -dv mfitzpat
drwxr-xr-x   2 mfitzpat umass  2 Apr 28 14:06 mfitzpat
0:owner@::deny
1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
2:group@:add_file/write_data/add_subdirectory/append_data:deny
3:group@:list_directory/read_data/execute:allow

4:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr

/write_attributes/write_acl/write_owner:deny
5:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:allow

I can access, create/delete files/dirs on the zfs system and permissions 
hold.

[mfitz...@hecate mfitzpat]$ touch foo
[mfitz...@hecate mfitzpat]$ ls -l
total 1
-rw-r--r--   1 mfitzpat umass  0 Apr 28 14:18 foo

When I try to access the dir/files on the linux client, my permissions 
do not carry over, nor do the newly created files, and I can not create 
new files/dirs.

[mfitz...@nona-man umass]$ ls -l
drwxr-xr-x+ 2 root root 2 Apr 28 13:18 mfitzpat

[mfitz...@nona-man mfitzpat]$ pwd
/fs/umass/mfitzpat
[mfitz...@nona-man mfitzpat]$ ls
[mfitz...@nona-man mfitzpat]$



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-04-29 Thread Cindy Swearingen

Hi Euan,

For full root pool recovery see the ZFS Administration Guide, here:

http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view

Recovering the ZFS Root Pool or Root Pool Snapshots

Additional scenarios and details are provided in the ZFS troubleshooting
wiki. The link is here but the site is not responding at the moment:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Check back here later today.
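
The backup side, as a sketch (the pool name "backup" and the snapshot name
are placeholders, and this assumes a release whose zfs receive supports
the -u option):

# zpool create backup <external-disk>
# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup | zfs receive -Fdu backup

The restore steps are the ones in the doc above: recreate the root pool,
receive the stream back, set bootfs, and install the bootblocks.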

Thanks,

Cindy

On 04/28/10 23:02, Euan Thoms wrote:

I'm looking for a way to backup my entire system, the rpool zfs pool to an 
external HDD so that it can be recovered in full if the internal HDD fails. 
Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which 
worked so well, I was very confident with it. Now ZFS doesn't have an exact 
replacement of this so I need to find a best practice to replace it.

I'm guessing that I can format the external HDD as a pool called 'backup' and "zfs 
send -R ... | zfs receive ..." to it. What I'm not sure about is how to restore. 
Back in the days of UFS, I would boot of the Solaris 10 CD in single user mode command 
prompt, partition HDD with correct slices, format it, mount it and ufsrestore the entire 
filesystem. With zfs, I don't know what I'm doing. Can I just make a pool called rpool 
and zfs send/receive it back?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, NFS, and ACLs ssues

2010-04-29 Thread Cindy Swearingen

Hi Mary Ellen,

I'm not really qualified to help you troubleshoot this problem.
Other community members on this list have wrestled with similar
problems and I hope they will comment...

Your Linux client doesn't seem to be suffering from the nobody
problem because you see mfitzpat on nona-man so UID/GIDs are
translated correctly.

This issue has come up often enough that I will start tracking
this in our troubleshooting wiki as soon as we get more feedback.

Thanks,

Cindy
On 04/29/10 09:23, Mary Ellen Fitzpatrick wrote:
I set up the share and mounted it on the linux client; permissions did not carry
over from the zfs share.



hecate:~> zfs create zp-ext/test/mfitzpat
hecate:/zp-ext/test> zfs get sharenfs zp-ext/test/mfitzpat
NAME  PROPERTY  VALUE SOURCE
zp-ext/test/mfitzpat  sharenfs  oninherited from zp-ext
hecate:/zp-ext/test> chown -R mfitzpat:umass mfitzpat

updated auto.home on linux client(nona-man)
test-rw,hard,intr   hecate:/zp-ext/test

nona-man:/# cd /fs/test
nona-man:/fs/test# ls -l
total 3
drwxr-xr-x+ 2 root root 2 Apr 29 11:15 mfitzpat

Permissions did not carry over from zfs share.
Willing test/try next step.

Mary  Ellen




Cindy Swearingen wrote:

Hi Mary Ellen,

We were looking at this problem and are unsure what the problem is...

To rule out NFS as the root cause, could you create and share a test 
ZFS file system without any ACLs to see if you can access the data 
from the
Linux client?

Let us know the result of your test.

Thanks,

Cindy
On 04/28/10 12:54, Mary Ellen Fitzpatrick wrote:
 
New to Solaris/ZFS and having a difficult time getting ZFS, NFS and
ACLs all working together properly. Trying to access/use zfs shared
filesystems on a linux client. When I access the dir/files on the
linux client, my permissions do not carry over, nor do the newly
created files, and I can not create new files/dirs. The
permissions/owner on the zfs share are set so the owner (mfitzpat) is
allowed to do everything, but permissions are not carrying over via
NFS to the linux client. I have googled/read and can not get it
right. I think this has something to do with NFSv4, but I can not
figure it out.


Any help appreciated
Mary Ellen

Running Solaris10 5/09 (u7) on a SunFire x4540 (hecate) with ZFS and 
zfs shares automounted to Centos5 client (nona-man).
Running NIS on nona-man(Centos5) and hecate (zfs) is a client.  All 
works well.


I have created the following zfs filesystems to share and have 
sharenfs=on

hecate:/zp-ext/spartans/umass> zfs get sharenfs
zp-ext/spartans/umass   sharenfs  oninherited 
from zp-ext/spartans
zp-ext/spartans/umass/mfitzpat  sharenfs  oninherited 
from zp-ext/spartans


set up inheritance:
hecate:/zp-ext/spartans/umass> zfs set aclinherit=passthrough 
zp-ext/spartans/umass
hecate:/zp-ext/spartans/umass> zfs set aclinherit=passthrough 
zp-ext/spartans/umass/mfitzpat
hecate:/zp-ext/spartans/umass> zfs set aclmode=passthrough 
zp-ext/spartans/umass
hecate:/zp-ext/spartans/umass> zfs set aclmode=passthrough 
zp-ext/spartans/umass/mfitzpat


Set owner:group:
hecate:/zp-ext/spartans/umass> chown mfitzpat:umass mfitzpat
hecate:/zp-ext/spartans/umass> ls -l
total 5
drwxr-xr-x   2 mfitzpat umass  2 Apr 28 13:18 mfitzpat

Permissions:
hecate:/zp-ext/spartans/umass> ls -dv mfitzpat
drwxr-xr-x   2 mfitzpat umass  2 Apr 28 14:06 mfitzpat
0:owner@::deny

1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory

/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
2:group@:add_file/write_data/add_subdirectory/append_data:deny
3:group@:list_directory/read_data/execute:allow

4:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr

/write_attributes/write_acl/write_owner:deny

5:everyone@:list_directory/read_data/read_xattr/execute/read_attributes

/read_acl/synchronize:allow

I can access, create/delete files/dirs on the zfs system and 
permissions hold.

[mfitz...@hecate mfitzpat]$ touch foo
[mfitz...@hecate mfitzpat]$ ls -l
total 1
-rw-r--r--   1 mfitzpat umass  0 Apr 28 14:18 foo

When I try to access the dir/files on the linux client, my 
permissions do not carry over, nor do the newly created files, and I 
can not create new files/dirs.

[mfitz...@nona-man umass]$ ls -l
drwxr-xr-x+ 2 root root 2 Apr 28 13:18 mfitzpat

[mfitz...@nona-man mfitzpat]$ pwd
/fs/umass/mfitzpat
[mfitz...@nona-man mfitzpat]$ ls
[mfitz...@nona-man mfitzpat]$






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-04-30 Thread Cindy Swearingen

Hi Ned,

Unless I misunderstand what bare metal recovery means, the following
procedure describes how to boot from CD, recreate the root pool, and
restore the root pool snapshots:

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view

I retest this process at every Solaris release.

Thanks,

Cindy

On 04/29/10 21:42, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cindy Swearingen

For full root pool recovery see the ZFS Administration Guide, here:

http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view

Recovering the ZFS Root Pool or Root Pool Snapshots


Unless I misunderstand, I think the intent of the OP question is how to do
bare metal recovery after some catastrophic failure.  In this situation,
recovery is much more complex than what the ZFS Admin Guide says above.  You
would need to boot from CD, and partition and format the disk, then create a
pool, and create a filesystem, and "zfs send | zfs receive" into that
filesystem, and finally install the boot blocks.  Only some of these steps
are described in the ZFS Admin Guide, because simply expanding the rpool is
a fundamentally easier thing to do.

Even though I think I could do that ... I don't have a lot of confidence in
it, and I can certainly imagine some pesky little detail being a problem.

This is why I suggested the technique of:
Reinstall the OS just like you did when you first built your machine, before
the catastrophy.  It doesn't even matter if you make the same selections you
made before (IP address, package selection, authentication method, etc) as
long as you're choosing to partition and install the bootloader like you did
before.

This way, you're sure the partitions, format, pool, filesystem, and
bootloader are all configured properly.
Then boot from CD again, and "zfs send | zfs receive" to overwrite your
existing rpool.

And as far as I know, that will take care of everything.  But I only feel
like 90% confident that would work.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Panic when deleting a large dedup snapshot

2010-04-30 Thread Cindy Swearingen

Brandon,

You're probably hitting this CR:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924824

I'm tracking the existing dedup issues here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
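
As a rough back-of-the-envelope check against the zdb -DD numbers you
posted below (in-core sizes of 162 and 1070 bytes per entry):

  5,339,247 x 162 bytes  ~=  865 MB
  1,479,972 x 1070 bytes ~= 1584 MB

so the table alone wants roughly 2.4 GB of memory, which is consistent
with the destroy stalling once the DDT no longer fits in the ARC.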

Thanks,

Cindy

On 04/29/10 23:11, Brandon High wrote:

I tried destroying a large (710GB) snapshot from a dataset that had
been written with dedup on. The host locked up almost immediately, but
there wasn't a stack trace on the console and the host required a
power cycle, but seemed to reboot normally. Once up, the snapshot was
still there. I was able to get a dump from this. The data was written
with b129, and the system is currently at b134.

I tried destroying it again, and the host started behaving badly.
'less' would hang, and there were several zfs-auto-snapshot processes
that were over an hour old, and the 'zfs snapshot' processes were
stuck on the first dataset of the pool. Eventually the host became
unusable and I rebooted again.

The host seems to be fine now, and is currently running a scrub.

Any ideas on how to avoid this in the future? I'm no longer using
dedup due to performance issues with it, which implies that the DDT is
very large.

bh...@basestar:~$ pfexec zdb -DD tank
DDT-sha256-zap-duplicate: 5339247 entries, size 348 on disk, 162 in core
DDT-sha256-zap-unique: 1479972 entries, size 1859 on disk, 1070 in core

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-03 Thread Cindy Swearingen

Hi Richard,

Renaming the root pool is not recommended. I have some details on what
actually breaks, but I can't find it now.

This limitation is described in the ZFS Admin Guide, but under the
LiveUpgrade section in the s10 version. I will add this limitation under
the general limitation section.

Obviously, you can't export a root pool that is in use, but you can
boot from alternate media and rename the exported root pool when it is
imported.

I simulated this process below.

Thanks,

Cindy


To restart the Solaris installation program,
type "install-solaris".
Solaris installation program exited.
# zpool import
  pool: rpool
id: 2186941205144212601
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

rpool   ONLINE
  c1t0d0s0  ONLINE
# zpool import rpool rpool2
cannot mount '/export': failed to create mountpoint
cannot mount '/export/home': failed to create mountpoint
cannot mount '/rpool': failed to create mountpoint
# zpool status
  pool: rpool2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool2  ONLINE   0 0 0
  c1t0d0s0  ONLINE   0 0 0

errors: No known data errors
# zpool export rpool2
# zpool import rpool2 rpool
cannot mount '/export': failed to create mountpoint
cannot mount '/export/home': failed to create mountpoint
cannot mount '/rpool': failed to create mountpoint
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 8.06G  58.9G96K  /rpool
rpool/ROOT4.50G  58.9G21K  legacy
rpool/ROOT/s10s_u9wos_07  4.50G  58.9G  4.49G  /
rpool/ROOT/s10s_u9wos...@now  4.80M  -  4.49G  -
rpool/dump1.50G  58.9G  1.50G  -
rpool/export44K  58.9G23K  /export
rpool/export/home   21K  58.9G21K  /export/home
rpool/swap2.06G  60.9G16K  -



On 05/02/10 07:33, Richard L. Hamilton wrote:

One can rename a zpool on import

zpool import -f pool_or_id newname

Is there any way to rename it (back again, perhaps)
on export?

(I had to rename rpool in an old disk image to access
some stuff in it, and I'd like to put it back the way it
was so it's properly usable if I ever want to boot off of it.)

But I suppose there must be other scenarios where that would
be useful too...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-05-03 Thread Cindy Swearingen

Hi Ned,

Yes, I agree  that it is a good idea not to update your root pool
version before restoring your existing root pool snapshots.

If you are using a later Solaris OS to recover your pool and root pool
snapshots, you can always create the pool with a specific version, like
this:

# zpool create -o version=19 rpool c1t3d0s0

I will add this info to the root pool recovery process.

Thanks for the feedback...

Cindy

On 04/30/10 22:46, Edward Ned Harvey wrote:

From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
Sent: Friday, April 30, 2010 10:46 AM

Hi Ned,

Unless I misunderstand what bare metal recovery means, the following
procedure describes how to boot from CD, recreate the root pool, and
restore the root pool snapshots:

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view

I retest this process at every Solaris release.


You are awesome.  ;-)
When I said I was 90% certain, it turns out, that was a spot-on assessment
of my own knowledge.  I did not know about setting the "bootfs" property.

I see that you are apparently storing the "zfs send" datastream in a file.
Of course, discouraged, but no problem as long as it's no problem.  I
personally prefer to "zfs send | zfs receive" directly onto removable
storage.

One more really important gotcha.  Let's suppose the version of zfs on the
CD supports up to zpool 14.  Let's suppose your "live" system had been fully
updated before crash, and let's suppose the zpool had been upgraded to zpool
15.  Wouldn't that mean it's impossible to restore your rpool using the CD?
Wouldn't it mean it's impossible to restore the rpool using anything other
than a fully installed, and at least moderately updated on-hard-disk OS?
Maybe you could fully install onto hard disk 2 of the system, and then
upgrade, and then use that OS to restore the rpool onto disk 1 of the
system...

Would that be fuel to recommend people, "Never upgrade your version of zpool
or zfs on your rpool?"


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Spare in use althought disk is healthy ?

2010-05-03 Thread Cindy Swearingen

Hi Robert,

Could be a bug.

What kind of system and disks are reporting these errors?

Thanks,

Cindy

On 05/02/10 10:55, Lutz Schumann wrote:
Hello, 


thanks for the feedback and sorry for the delay in answering.

I checked the log and fmadm. It seems the log does not show any changes; however, fmadm shows: 


Apr 23 2010 18:32:26.363495457 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 23 2010 18:32:26.363482031 ereport.io.scsi.cmd.disk.recovered

Same thing for the other disk: 


Apr 21 2010 15:02:24.117303285 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 21 2010 15:02:24.117300448 ereport.io.scsi.cmd.disk.recovered

It seems there is a VERY short temp error. 

I will try to detach this. 

Is this a Bug ? 
Robert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen

Brandon,

Using beadm to migrate your BEs to another root pool (and then
performing all the steps to get the system to boot) is different
than just outright renaming your existing root pool on import.

Since pool renaming isn't supported, I don't think we have identified
all the boot/mount-at-boot components that need to be changed.

Cindy

On 05/03/10 18:34, Brandon High wrote:

On Mon, May 3, 2010 at 9:13 AM, Cindy Swearingen
 wrote:

Renaming the root pool is not recommended. I have some details on what
actually breaks, but I can't find it now.


Really? I asked about using a new pool for the rpool, and there were
some comments that it works fine. In fact, you'd suggested using beadm
to move the BE to the new pool.

On x86, grub looks at the findroot command, which checks
/rpool/boot/grub/bootsign/ (See
http://docs.sun.com/app/docs/doc/819-2379/ggvms?a=view)
The zpool should have the bootfs property set (although I've had it
work without this set). (See
http://docs.sun.com/app/docs/doc/819-2379/ggqhp?l=en&a=view)

To answer Richard's question, if you have to rename a pool during
import due to a conflict, the only way to change it back is to
re-import it with the original name. You'll have to either export the
conflicting pool, or (if it's rpool) boot off of a LiveCD which
doesn't use an rpool to do the rename.

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen

No, beadm doesn't take care of all the steps that I provided
previously and included below.

Cindy

You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to boot from new root pool

The BE recreation gets you part of the way and it's fast, anyway.


1. Create the second root pool.

# zpool create rpool2 c5t1d0s0

2. Create the new BE in the second root pool.

# beadm create -p rpool2 osol2BE

3. Activate the new BE.

# beadm activate osol2BE

4. Install the boot blocks.

5. Test that the system boots from the second root pool.

6. Update BIOS and GRUB to boot from new pool.
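
For step 4, the bootblock commands would look something like the following,
assuming the new root pool disk is c5t1d0s0 as in step 1 (installgrub on x86,
installboot on SPARC):

x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c5t1d0s0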

On 05/04/10 11:04, Brandon High wrote:

On Tue, May 4, 2010 at 7:19 AM, Cindy Swearingen
 wrote:

Using beadm to migrate your BEs to another root pool (and then
performing all the steps to get the system to boot) is different
than just outright renaming your existing root pool on import.


Does beadm take care of all the other steps that need to happen? I
imagine you'd have to keep rpool around otherwise ...

I ended up doing an offline copy to a new pool, which I renamed to
rpool at the end to avoid any problems

-B


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Cindy Swearingen

Hi Dick,

Experts on the cifs-discuss list could probably advise you better.
You might even check the cifs-discuss archive because I hear that
the SMB/NFS sharing scenario has been covered previously on that
list.
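
The usual fix is along these lines: give yourself an inheritable ACL entry on
the dataset so that files created over SMB also end up readable over NFS.
A rough sketch only, with a made-up dataset path and user name:

# chmod A+user:dick:full_set:file_inherit/dir_inherit:allow /tank/share
# ls -V /tank/share        (verify the new ACL entry and its inheritance flags)

Existing files would still need their ACLs or permission bits adjusted
separately, since inheritance only applies to newly created files.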

Thanks,

Cindy

On 05/04/10 03:06, Dick Hoogendijk wrote:
I have some ZFS datasets that are shared through CIFS/NFS. So I created 
them with sharenfs/sharesmb options.


I have full access from windows (through cifs) to the datasets, however, 
all files and directories are created with (UNIX) permisions of 
(--)/(d--). So, although I can access the files now from my 
windows machiens, I can -NOT- access the same files with NFS.
I know I gave myself full permissions in the ACL list. That's why 
sharesmb works I guess. But what do I have to do to make -BOTH- work?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-05-06 Thread Cindy Swearingen

Hi Bob,

You can review the latest Solaris 10 and OpenSolaris release dates here:

http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf

Solaris 10 release, CY2010
OpenSolaris release, 1st half CY2010

Thanks,

Cindy

On 05/05/10 18:03, Bob Friesenhahn wrote:

On Wed, 5 May 2010, Ray Van Dolson wrote:


From a zfs standpoint, Solaris 10 does not seem to be behind the
currently supported OpenSolaris release.


Well, being able to remove ZIL devices is one important feature
missing.  Hopefully in U9. :)


While the development versions of OpenSolaris are clearly well beyond 
Solaris 10, I don't believe that the supported version of OpenSolaris (a 
year old already) has this feature yet either and Solaris 10 has been 
released several times since then already.  When the forthcoming 
OpenSolaris release emerges in 2011, the situation will be far 
different.  Solaris 10 can then play catch-up with the release of U9 in 
2012.


Bob

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup ration for iscsi-shared zfs dataset

2010-05-06 Thread Cindy Swearingen

Hi--

Even though the dedup property can be set on a file system basis,
dedup space usage is accounted for at the pool level by using the
zpool list command.

My non-expert opinion is that it would be near impossible to report
space usage for dedup and non-dedup file systems at the file system
level.

More details are in the ZFS Dedup FAQ:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
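
For example, the pool-wide ratio is visible like this (using tank as the
pool name):

# zpool list tank              (the DEDUP column shows the pool-wide ratio)
# zpool get dedupratio tank    (the same number as a pool property)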

Thanks,

Cindy

On 05/06/10 12:31, eXeC001er wrote:

Hi.

How can i get this info?

Thanks.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hanging

2010-05-10 Thread Cindy Swearingen

Hi Eduardo,

Please use the following steps to collect more information:

1. Use the following command to get the PID of the zpool import process,
 like this:

# ps -ef | grep zpool

2. Use the actual PID found in step 1 in the following
command, like this:

echo "0t<PID>::pid2proc|::walk thread|::findstack" | mdb -k

Then, send the output.
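
For example, if the PID from step 1 turned out to be 1234 (a made-up number),
the sequence would look like:

# ps -ef | grep zpool
    root  1234 ...  zpool import backup
# echo "0t1234::pid2proc|::walk thread|::findstack" | mdb -k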

Thanks,

Cindy
On 05/10/10 14:22, Eduardo Bragatto wrote:

On May 10, 2010, at 4:46 PM, John Balestrini wrote:

Recently I had a similar issue where the pool wouldn't import and 
attempting to import it would essentially lock the server up. Finally 
I used pfexec zpool import -F pool1 and simply let it do its thing. 
After almost 60 hours the import finished and all has been well 
since (except my backup procedures have improved!).


Hey John,

thanks a lot for answering -- I already allowed the "zpool import" 
command to run from Friday to Monday and it did not complete -- I also 
made sure to start it using "truss" and literally nothing has happened 
during that time (the truss output file does not have anything new).


While the "zpool import" command runs, I don't see any CPU or Disk I/O 
usage. "zpool iostat" shows very little I/O too:


# zpool iostat -v
               capacity     operations    bandwidth
pool   used  avail   read  write   read  write
  -  -  -  -  -  -
backup    31.4T  19.1T 11  2  29.5K  11.8K
  raidz1  11.9T   741G  2  0  3.74K  3.35K
c3t102d0  -  -  0  0  23.8K  1.99K
c3t103d0  -  -  0  0  23.5K  1.99K
c3t104d0  -  -  0  0  23.0K  1.99K
c3t105d0  -  -  0  0  21.3K  1.99K
c3t106d0  -  -  0  0  21.5K  1.98K
c3t107d0  -  -  0  0  24.2K  1.98K
c3t108d0  -  -  0  0  23.1K  1.98K
  raidz1  12.2T   454G  3  0  6.89K  3.94K
c3t109d0  -  -  0  0  43.7K  2.09K
c3t110d0  -  -  0  0  42.9K  2.11K
c3t111d0  -  -  0  0  43.9K  2.11K
c3t112d0  -  -  0  0  43.8K  2.09K
c3t113d0  -  -  0  0  47.0K  2.08K
c3t114d0  -  -  0  0  42.9K  2.08K
c3t115d0  -  -  0  0  44.1K  2.08K
  raidz1  3.69T  8.93T  3  0  9.42K  610
c3t87d0   -  -  0  0  43.6K  1.50K
c3t88d0   -  -  0  0  43.9K  1.48K
c3t89d0   -  -  0  0  44.2K  1.49K
c3t90d0   -  -  0  0  43.4K  1.49K
c3t91d0   -  -  0  0  42.5K  1.48K
c3t92d0   -  -  0  0  44.5K  1.49K
c3t93d0   -  -  0  0  44.8K  1.49K
  raidz1  3.64T  8.99T  3  0  9.40K  3.94K
c3t94d0   -  -  0  0  31.9K  2.09K
c3t95d0   -  -  0  0  31.6K  2.09K
c3t96d0   -  -  0  0  30.8K  2.08K
c3t97d0   -  -  0  0  34.2K  2.08K
c3t98d0   -  -  0  0  34.4K  2.08K
c3t99d0   -  -  0  0  35.2K  2.09K
c3t100d0  -  -  0  0  34.9K  2.08K
  -  -  -  -  -  -

Also, the third "raidz" entry shows less "write" in bandwidth (610). 
This is actually the first time it's a non-zero value.


My last attempt to import it, was using this command:

zpool import -o failmode=panic -f -R /altmount backup

However it did not panic. As I mentioned in the first message, it mounts 
189 filesystems and hangs on #190. While the command is hanging, I can 
use "zfs mount" to mount filesystems #191 and above (only one filesystem 
does not mount and causes the import procedure to hang).


Before trying the command above, I was using only "zpool import backup", 
and the "iostat" output was showing ZERO for the third raidz from the 
list above (not sure if that means something, but it does look odd).


I'm really on a dead end here, any help is appreciated.

Thanks,
Eduardo Bragatto.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] spares bug: explain to me status of bug report.

2010-05-18 Thread Cindy Swearingen

Hi--

The scenario in the bug report below is that the pool is exported.

The spare can't kick in if the pool is exported. It looks like the
issue reported in this CR's See Also section, CR 6887163 is still
open.

Thanks,

Cindy

On 05/18/10 11:19, eXeC001er wrote:

Hi.

In bugster i found bug about spares. 
I can to reproduce the problem. but developer set status "Not a defect". 
Why?


http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905317

Thanks.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] spares bug: explain to me status of bug report.

2010-05-18 Thread Cindy Swearingen

I think the remaining CR is this one:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6911420

cs

On 05/18/10 12:08, eXeC001er wrote:
6887163 - 11-Closed:Duplicate (Closed)
6945634 - 11-Closed:Duplicate (Closed)



2010/5/18 Cindy Swearingen <cindy.swearin...@oracle.com>:


Hi--

The scenario in the bug report below is that the pool is exported.

The spare can't kick in if the pool is exported. It looks like the
issue reported in this CR's See Also section, CR 6887163 is still
open.

Thanks,

Cindy


On 05/18/10 11:19, eXeC001er wrote:

Hi.

In bugster I found a bug about spares. I can reproduce the
problem, but the developer set the status to "Not a defect". Why?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905317

Thanks.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org <mailto:zfs-discuss@opensolaris.org>
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root pool mirror problems

2010-05-20 Thread Cindy Swearingen

Hi Roi,

You need equivalent-sized disks for a mirrored pool. When you attempt to
attach a disk that is too small, you will see a message similar to the
following:

cannot attach c1t3d0 to c1t2d0: device is too small

In general, an "I/O error" message means that the partition slice is not
available or ZFS is having trouble accessing the disk, so you might have
multiple issues to resolve.

Review the ZFS troubleshooting wiki for info on resolving the I/O error,
here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
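
Once the sizing issue is sorted out, the attach itself is straightforward.
A sketch only, reusing the device names from the error example above
(installgrub applies on x86; SPARC uses installboot):

# prtvtoc /dev/rdsk/c1t2d0s0        (compare the slice sizes on both disks)
# prtvtoc /dev/rdsk/c1t3d0s0
# zpool attach rpool c1t2d0s0 c1t3d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t3d0s0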

Thanks,

Cindy

On 05/20/10 02:39, roi shidlovsky wrote:
Hi. 
I am trying to attach a mirror disk to my root pool. If the two disks are the same size, it all works fine, but if the two disks are different sizes (8GB and 7.5GB) I get an "I/O error" on the attach command.


Can anybody tell me what I am doing wrong?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tank zpool has tanked out :(

2010-05-21 Thread Cindy Swearingen

Andreas,

Does the pool tank actually have 6 disks (c7t0-c7t5), with c7t3d0 now
masking c7t5d0, or is it a 5-disk configuration with c7t3d0 repeated twice?
If it is the first case (c7t0-c7t5), then I would check how these
devices are connected before attempting to replace the c7t3d0 disk.

What does the format utility display for these devices?

I haven't seen this error but others on this list have resolved this
problem by exporting and importing the pool.
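
If you go that route, it amounts to the following (with a current backup in
place first):

# zpool export tank
# zpool import tank
# zpool status tank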

Always have good backups of your data.

Thanks,

Cindy

On 05/21/10 03:26, Andreas Iannou wrote:

Hi there,
 
My zpool tank has been chugging along nicely, but after a failed attempt 
at offlining a misbehaving drive I've got a weird situation.
 
  pool: tank

 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
 scrub: none requested
config:
NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  raidz1-0  DEGRADED 0 0 0
c7t0d0  ONLINE   0 0 0
c7t4d0  ONLINE   0 0 0
c7t1d0  ONLINE   0 0 0
*c7t3d0  ONLINE   0 0 0*
c7t2d0  ONLINE   0 0 0
*c7t3d0  OFFLINE  0 0 0*
errors: No known data errors

Why does that particular drive appear twice? I am on SNV_134. They are 
6x500Gb Western Digital RE drives. I have a spare 500Gb on another 
controller (c0t0d0) which I want to use to replace the (probably 
dying) drive, but I'm not sure I can do this and have it correctly remove 
the one I want:


# zpool replace tank c7t3d0 c0t0d0
 
Any ideas?
 
Gracias,

Andre


Find it at CarPoint.com.au New, Used, Demo, Dealer or Private? 






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about zpool iostat output

2010-05-25 Thread Cindy Swearingen

Hi Thomas,

This looks like a display bug. I'm seeing it too.

Let me know which Solaris release you are running and
I will file a bug.

Thanks,

Cindy

On 05/25/10 01:42, Thomas Burgess wrote:

I was just wondering:

I added a SLOG/ZIL to my new system today... I noticed that the L2ARC 
shows up under its own heading, but the SLOG/ZIL doesn't. Is this 
correct?



see:



               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       15.3G  44.2G      0      0      0      0
  c6t4d0s0  15.3G  44.2G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
tank        10.9T  7.22T      0  2.43K      0   300M
  raidz2    10.9T  7.22T      0  2.43K      0   300M
    c4t6d0      -      -      0    349      0  37.6M
    c4t5d0      -      -      0    350      0  37.6M
    c5t7d0      -      -      0    350      0  37.6M
    c5t3d0      -      -      0    350      0  37.6M
    c8t0d0      -      -      0    354      0  37.6M
    c4t7d0      -      -      0    351      0  37.6M
    c4t3d0      -      -      0    350      0  37.6M
    c5t8d0      -      -      0    349      0  37.6M
    c5t0d0      -      -      0    348      0  37.6M
    c8t1d0      -      -      0    353      0  37.6M
  c6t5d0s0      0  8.94G      0      0      0      0
cache           -      -      -      -      -      -
  c6t5d0s1  37.5G      0      0    158      0  19.6M



It seems sort of strange to me that it doesn't look like this instead:






               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       15.3G  44.2G      0      0      0      0
  c6t4d0s0  15.3G  44.2G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
tank        10.9T  7.22T      0  2.43K      0   300M
  raidz2    10.9T  7.22T      0  2.43K      0   300M
    c4t6d0      -      -      0    349      0  37.6M
    c4t5d0      -      -      0    350      0  37.6M
    c5t7d0      -      -      0    350      0  37.6M
    c5t3d0      -      -      0    350      0  37.6M
    c8t0d0      -      -      0    354      0  37.6M
    c4t7d0      -      -      0    351      0  37.6M
    c4t3d0      -      -      0    350      0  37.6M
    c5t8d0      -      -      0    349      0  37.6M
    c5t0d0      -      -      0    348      0  37.6M
    c8t1d0      -      -      0    353      0  37.6M
log             -      -      -      -      -      -
  c6t5d0s0      0  8.94G      0      0      0      0
cache           -      -      -      -      -      -
  c6t5d0s1  37.5G      0      0    158      0  19.6M








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unsetting the bootfs property possible? imported a FreeBSD pool

2010-05-25 Thread Cindy Swearingen

Hi Reshekel,

You might review these resources for information on using ZFS without
having to hack code:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs

ZFS Administration Guide

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

I will add a section on migrating from FreeBSD because this problem
comes up often enough. You might search the list archive for this
problem to see how others have resolved the partition issues.

Moving ZFS storage pools from a FreeBSD system to a Solaris system is
difficult because it looks like FreeBSD uses the disk's p0 partition
and in Solaris releases, ZFS storage pools are either created with
whole disks by using the d0 identifier or root pools, which are created
by using the disk slice identifier (s0). This is an existing boot
limitation.

For example, see the difference in the two pools:

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0
  pool: dozer
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
dozer   ONLINE   0 0 0
  c2t5d0ONLINE   0 0 0
  c2t6d0ONLINE   0 0 0

errors: No known data errors


If you want to boot from a ZFS storage pool then you must create the
pool with slices. This is why you see the message about EFI labels
because pools that are created with whole disks use an EFI label and
Solaris doesn't boot from an EFI label.

You can add a cache device to a pool reserved for booting, but you
must create a disk slice and then, add the cache device like this:

# zpool add rpool cache c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0
cache
  c1t2d0s0ONLINE   0 0 0


I suggest creating two pools, one small pool for booting and one larger
pool for data storage.
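
In other words, something like this, where the disk names simply mirror the
examples above:

# zpool create rpool mirror c1t0d0s0 c1t1d0s0   (boot pool on slices, SMI label)
# zpool create dozer c2t5d0 c2t6d0              (data pool on whole disks, EFI label)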

Thanks,

Cindy
On 05/25/10 02:58, Reshekel Shedwitz wrote:

Greetings -

I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am 
in what seems to be a weird situation regarding this pool. Maybe someone can 
help.

I used to boot off of this pool in FreeBSD, so the bootfs property got set:

r...@nexenta:~# zpool get bootfs tank
NAME  PROPERTY  VALUE   SOURCE
tank  bootfstanklocal

The presence of this property seems to be causing me all sorts of headaches. I 
cannot replace a disk or add a L2ARC because the presence of this flag is how 
ZFS code (libzfs_pool.c: zpool_vdev_attach and zpool_label_disk) determines if 
a pool is allegedly a root pool.

r...@nexenta:~# zpool add tank cache c1d0
cannot label 'c1d0': EFI labeled devices are not supported on root pools.

To replace disks, I was able to hack up libzfs_zpool.c and create a new custom 
version of the zpool command. That works, but this is a poor solution going 
forward because I have to be sure I use my customized version every time I 
replace a bad disk.

Ultimately, I would like to just set the bootfs property back to default, but this seems to be beyond my ability. There are some checks in libzfs_pool.c that I can bypass in order to set the value back to its default of "-", but ultimately I am stopped because there is code in zfs_ioctl.c, which I believe is kernel code, that checks to see if the bootfs value supplied is actually an existing dataset. 


I'd compile my own kernel but hey, this is only my first day using OpenSolaris 
- it was a big enough feat just learning how to compile stuff in the ON source 
tree :D

What should I do here? Is there some obvious solution I'm missing? I'd like to 
be able to get my pool back to a state where I can use the *stock* zpool 
command to maintain it. I don't boot off of this pool anymore, so if I could 
somehow set the bootfs property back to its default, that would solve it.

BTW, for reference, here is the output of zpool status (after I hacked up zpool 
to let me add a l2arc):

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scan: resilvered 351G in 2h44m with 0 errors on Tue May 25 23:33:38 2010
config:

NAME  STATE READ WRITE CKSUM
tank  ONLINE   0 0 0
  raidz2-0ONLINE   0 0 0
c2t5d0p0  ONLINE   0 0 0
c2t4d0p0  ONLINE   0 0 0
 

Re: [zfs-discuss] unsetting the bootfs property possible? imported a FreeBSD pool

2010-05-25 Thread Cindy Swearingen

Hi--

I apologize for misunderstanding your original issue.

Regardless of the original issues and the fact that current Solaris
releases do not let you set the bootfs property on a pool that has a
disk with an EFI label, the secondary bug here is not being able to
remove a bootfs property on a pool that has a disk with an EFI label.
If this helps with the migration of pools, then we should allow you
to remove the bootfs property.

I will file this bug on your behalf.

In the meantime, I don't see how you can resolve the problem on this
pool.

Thanks,

Cindy


On 05/25/10 09:42, Reshekel Shedwitz wrote:
Cindy, 


Thanks for your reply. The important details may have been buried in my post, I 
will repeat them again to make it more clear:

(1) This was my boot pool in FreeBSD, but I do not think the partitioning 
differences are really the issue. I can import the pool to nexenta/opensolaris 
just fine.

Furthermore, this is *no longer* being used as a root pool in nexenta. I 
purchased an SSD for the purpose of booting nexenta. This pool is used purely 
for data storage - no booting.

(2) I had to hack the code because zpool is forbidding me from adding or 
replacing devices - please see my logs in the previous post.

zpool thinks this pool is a "boot pool" due to the bootfs flag being set, and 
zpool will not let me unset the bootfs property. So I'm stuck in a situation where zpool 
thinks my pool is a boot pool because of the bootfs property, and zpool will not let me 
unset the bootfs property. Because zpool thinks this pool is the boot pool, it is trying 
to forbid me from creating a configuration that isn't compatible with booting.

In this situation, I am unable to add or replace devices without using my 
hacked version of zpool.

I was able to hack the code to allow zpool to replace and add devices, but I 
was not able to figure out how to set the bootfs property back to the default 
value.

Does this help explain my situation better? I think this is a bug, or maybe I'm 
missing something totally obvious.

Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unsetting the bootfs property possible? imported a FreeBSD pool

2010-05-26 Thread Cindy Swearingen

Hi--

I'm glad you were able to resolve this problem.

I drafted some hints in this new section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues

We had all the clues, you and Brandon got it though.

I think my brain function was missing yesterday.

Thanks,

Cindy

On 05/25/10 16:59, Reshekel Shedwitz wrote:

Cindy,

Thanks. Same goes to everyone else on this thread.

I actually solved the issue - I booted back into FreeBSD's "Fixit" mode and was still able to import the pool (wouldn't have been able to if I upgraded the pool version!). FreeBSD's zpool command allowed me to unset the bootfs property. 
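
For the archives, the Fixit-side commands amount to roughly the following;
the exact invocation may differ by FreeBSD release, so treat this as a sketch:

# zpool import -f tank
# zpool set bootfs="" tank     (an empty value clears the property)
# zpool export tank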

I guess that should have been more obvious to me. At least now I'm in good shape as far as this pool goes - zpool won't complain when I try to replace disks or add cache. 


Might be worth documenting this somewhere as a "gotcha" when migrating from 
FreeBSD to OpenSolaris.

Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unsetting the bootfs property possible? imported a FreeBSD pool

2010-05-26 Thread Cindy Swearingen

Hi Lori,

I haven't filed it yet.

We need to file a CR that allows us to successfully set bootfs to "".

The failure case in this thread was attempting to unset bootfs on
a pool with disks that have EFI labels.

Thanks,

Cindy

On 05/26/10 14:09, Lori Alt wrote:


Was a bug ever filed against zfs for not allowing the bootfs property to 
be set to ""?  We should always let that request succeed.


lori

On 05/26/10 09:09 AM, Cindy Swearingen wrote:

Hi--

I'm glad you were able to resolve this problem.

I drafted some hints in this new section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues 



We had all the clues, you and Brandon got it though.

I think my brain function was missing yesterday.

Thanks,

Cindy

On 05/25/10 16:59, Reshekel Shedwitz wrote:

Cindy,

Thanks. Same goes to everyone else on this thread.

I actually solved the issue - I booted back into FreeBSD's "Fixit" 
mode and was still able to import the pool (wouldn't have been able 
to if I upgraded the pool version!). FreeBSD's zpool command allowed 
me to unset the bootfs property.
I guess that should have been more obvious to me. At least now I'm in 
good shape as far as this pool goes - zpool won't complain when I try 
to replace disks or add cache.
Might be worth documenting this somewhere as a "gotcha" when 
migrating from FreeBSD to OpenSolaris.


Thanks!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] nfs share of nested zfs directories?

2010-05-27 Thread Cindy Swearingen

Cassandra,

Which Solaris release is this?

This is working for me between a Solaris 10 server and an OpenSolaris 
client.


Nested mount points can be tricky, and I'm not sure if you are looking
for the mirror mount feature (where newly created directory contents are
automatically accessible on the client), which is not available in the
Solaris 10 release.

See the examples below.


Thanks,

Cindy

On the server:

# zpool create pool c1t3d0
# zfs create pool/myfs1
# cp /usr/dict/words /pool/myfs1/file.1
# zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
# ls /pool/myfs1
file.1  myfs2
# cp /usr/dict/words /pool/myfs1/myfs2/file.2
# ls /pool/myfs1/myfs2/
file.2
# zfs set sharenfs=on pool/myfs1
# zfs set sharenfs=on pool/myfs2
# share
-   /pool/myfs1   rw   ""
-   /pool/myfs1/myfs2   rw   ""

On the client:

# ls /net/t2k-brm-03/pool/myfs1
file.1  myfs2
# ls /net/t2k-brm-03/pool/myfs1/myfs2
file.2
# mount -F nfs t2k-brm-03:/pool/myfs1 /mnt
# ls /mnt
file.1  myfs2
# ls /mnt/myfs2
file.2

On the server:

# touch /pool/myfs1/myfs2/file.3

On the client:

# ls /mnt/myfs2
file.2  file.3

On 05/27/10 14:02, Cassandra Pugh wrote:
I was wondering if there is a special option to share out a set of nested
directories?  Currently if I share out a directory with /pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.

mydir1 and mydir2 are each a zfs filesystem, each shared with the proper
sharenfs permissions.

Did I miss a browse or traverse option somewhere?
-
Cassandra
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] expand zfs for OpenSolaris running inside vm

2010-05-28 Thread Cindy Swearingen

Hi--

I can't speak to running a ZFS root pool in a VM, but the problem is
that you can't add another disk to a root pool. All the boot info needs
to be contiguous. This is a boot limitation.

I've not attempted either of these operations in a VM but you might
consider:

1. Replacing the root pool disk with a larger disk
2. Attaching a larger disk to the root pool and then detaching
 the smaller disk

I like #2 best. See this section in the ZFS troubleshooting wiki:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk
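
With the disk names from your mail, option 2 boils down to something like
this (only a sketch; double-check the slice layout on the new disk first):

# zpool attach rpool c7d0s0 c7d1s0
  (wait until 'zpool status rpool' shows the resilver has completed)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
# zpool detach rpool c7d0s0
  (then point the VM's BIOS/boot order at the new, larger disk)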

Thanks,

Cindy

On 05/28/10 12:54, me wrote:

hello, all

I have constrained disk space (only 8GB) while running the OS inside a VM. 
Now I want to add more. It is easy to add on the VM side, but how can I 
update the fs in the OS?


I cannot use autoexpand because it isn't implemented in my system:
$ uname -a
SunOS sopen 5.11 snv_111b i86pc i386 i86pc
If it were 171 it would be great, right?

Doing following:

o added new virtual HDD (it becomes /dev/rdsk/c7d1s0)
o run format, write label

# zpool status
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h10m with 0 errors on Fri May 28 16:47:05 
2010

config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c7d0s0ONLINE   0 0 0

errors: No known data errors

# zpool add rpool c7d1
cannot label 'c7d1': EFI labeled devices are not supported on root pools.

# prtvtoc /dev/rdsk/c7d0s0 | fmthard -s - /dev/rdsk/c7d1s0
fmthard:  New volume table of contents now in place.

# zpool add rpool c7d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c7d1s0 overlaps with /dev/dsk/c7d1s2

# zpool add -f rpool c7d1s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate 
logs


o OMG, I tried all the magic commands that I found on the internet and in 
TFM. Now writing to the mailing list :-). Help!


--
Dmitry




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [RESOLVED] Re: expand zfs for OpenSolaris running inside vm

2010-06-01 Thread Cindy Swearingen

Hi--

The purpose of the ZFS dump volume is to provide space for a
system crash dump. You can choose not to have one, I suppose,
but you wouldn't be able to collect valuable system info.
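
If you removed it and later want it back, the dump device is just a ZFS
volume that dumpadm points at, roughly like this (the 1G size is only an
example):

# zfs create -V 1G rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump
# dumpadm                      (verify the configured dump device)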

Thanks,

Cindy

On 05/30/10 11:28, me wrote:

Reinstalling grub helped.

What is the purpose of dump slice?

On Sun, May 30, 2010 at 9:05 PM, me <dea...@gmail.com> wrote:


Thanks! It is exactly i was looking for.


On Sat, May 29, 2010 at 12:44 AM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:

2. Attaching a larger disk to the root pool and then detaching
 the smaller disk

I like #2 best. See this section in the ZFS troubleshooting wiki:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk


Size of pool is changed, i updated swap size too. Now i have
detached old disk.
I did

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0


Reboot and fail to startup. Grub is loads, os loading screen shows
and then restart :(. I loaded rescue disc console but don't know
what to do.

-- 
Dmitry



--
Dmitry




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inheritance

2010-06-01 Thread Cindy Swearingen

Hi--

I'm no user property expert, but I have some syntax for you to try to
resolve these problems. See below.

Maybe better ways exist but one way to correct datapool inheritance of
com.sun:auto-snapshot:dailyy is to set it to false, inherit the false
setting, then reset the correct property at the datapool/system level.

If this is what happened:

# zfs set com.sun:auto-snapshot:dailyy=true datapool
# zfs get com.sun:auto-snapshot:dailyy datapool datapool/system
NAME PROPERTY  VALUE   SOURCE
datapool com.sun:auto-snapshot:dailyy  truelocal
datapool/system  com.sun:auto-snapshot:dailyy  trueinherited from 
datapool



Fix like this:

# zfs set com.sun:auto-snapshot:dailyy=false datapool
# zfs inherit -r com.sun:auto-snapshot:dailyy datapool
# zfs get com.sun:auto-snapshot:dailyy datapool datapool/system
NAMEPROPERTY  VALUE   SOURCE
datapoolcom.sun:auto-snapshot:dailyy  -   -
datapool/system com.sun:auto-snapshot:dailyy  -   -

Then, reset at the right level:

# zfs set com.sun:auto-snapshot:daily=true datapool/system
# zfs get com.sun:auto-snapshot:daily datapool datapool/system
NAME PROPERTY VALUE   SOURCE
datapool com.sun:auto-snapshot:daily  -   -
datapool/system  com.sun:auto-snapshot:daily  truelocal


Thanks,

Cindy




On 05/31/10 06:44, wojciech wrote:

Hi,

I created a couple of zfs file systems: 

datapool 
datapool/system

datapool/system/mikkel
datapool/users
datapool/users/john 
...
 
I have set com.sun:auto-snapshot:daily to true on datapool/users and ran inherit -r on datapool/users. 


datapool/users/john com.sun:auto-snapshot:daily  true inherited from 
dataPool/users

I wanted to do the same with the system users,
but I didn't notice that I used datapool instead of datapool/system in inherit 
-r, and now all users under datapool/system inherit settings from datapool.

How to revert this setting to default or change to correct one.

I have also mistyped the property as com.sun:auto-snapshot:dailyy with a double yy; how do I 
get rid of it?
thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot destroy ... dataset already exists

2010-06-02 Thread Cindy Swearingen

Hi Ned,

If you do incremental receives, this might be CR 6860996:

%temporary clones are not automatically destroyed on error

A temporary clone is created for an incremental receive and
in some cases, is not removed automatically.

Victor might be able to describe this better, but consider
the following steps as further diagnosis or a workaround:

1. Determine clone names:

# zdb -d <pool> | grep %

2. Destroy identified clones:
# zfs destroy <clone>

It will complain that 'dataset does not exist', but you can check
again (see step 1).

3. Destroy snapshot(s) that could not be destroyed previously
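
As a sketch with your pool name (the %recv clone name below is hypothetical;
use whatever step 1 actually prints):

# zdb -d storagepool | grep %
Dataset storagepool/nas-lyricpool/%recv [ZPL], ...
# zfs destroy storagepool/nas-lyricpool/%recv
# zfs destroy storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30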

Thanks,

Cindy

On 06/02/10 08:42, Edward Ned Harvey wrote:

This is the problem:

[r...@nasbackup backup-scripts]# zfs destroy storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30
cannot destroy 'storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30': dataset already exists


 

This is apparently a common problem.  It's happened to me twice already, 
and the third time now.  Each time it happens, it's on the "backup" 
server, so fortunately, I have total freedom to do whatever I want, 
including destroy the pool.


 

The previous two times, I googled around, basically only found "destroy 
the pool" as a solution, and I destroyed the pool.


 

This time, I would like to dedicate a little bit of time and resource to 
finding the cause of the problem, so hopefully this can be fixed for 
future users, including myself.  This time I also found "apply updates 
and repeat your attempt to destroy the snapshot"  ...  So I applied 
updates, and repeated.  But no improvement.  The OS was sol 10u6, but 
now it’s fully updated.  Problem persists.


 


I’ve also tried exporting and importing the pool.

 

Somebody on the Internet suspected the problem is somehow aftermath of 
killing a "zfs send" or receive.  This is distinctly possible, as I’m 
sure that’s happened on my systems.  But there is currently no send or 
receive being killed ... Any such occurrence is long since past, and 
even beyond reboots and such.


 

I do not use clones.  There are no clones of this snapshot anywhere, and 
there never have been.


 

I do have other snapshots, which were incrementally received based on 
this one.  But that shouldn't matter, right?


 

I have not yet called support, although we do have a support contract. 

 


Any suggestions?

 


FYI:

 


[r...@nasbackup backup-scripts]# zfs list

NAME                                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                                             19.3G   126G    34K  /rpool
rpool/ROOT                                                        16.3G   126G    21K  legacy
rpool/ROOT/nasbackup_slash                                        16.3G   126G  16.3G  /
rpool/dump                                                        1.00G   126G  1.00G  -
rpool/swap                                                        2.00G   127G  1.08G  -
storagepool                                                       1.28T  4.06T  34.4K  /storage
storagepool/nas-lyricpool                                         1.27T  4.06T  1.13T  /storage/nas-lyricpool
storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30           94.1G      -  1.07T  -
storagepool/nas-lyricp...@daily-2010-06-01-00-00-00                   0      -  1.13T  -
storagepool/nas-rpool-ROOT-nas_slash                              8.65G  4.06T  8.65G  /storage/nas-rpool-ROOT-nas_slash
storagepool/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00        0      -  8.65G  -
zfs-external1                                                     1.13T   670G    24K  /zfs-external1
zfs-external1/nas-lyricpool                                       1.12T   670G  1.12T  /zfs-external1/nas-lyricpool
zfs-external1/nas-lyricp...@daily-2010-06-01-00-00-00                 0      -  1.12T  -
zfs-external1/nas-rpool-ROOT-nas_slash                            8.60G   670G  8.60G  /zfs-external1/nas-rpool-ROOT-nas_slash
zfs-external1/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00      0      -  8.60G  -


 


And

 


[r...@nasbackup ~]# zfs get origin

NAME                                                      PROPERTY  VALUE   SOURCE
rpool                                                     origin    -       -
rpool/ROOT                                                origin    -       -
rpool/ROOT/nasbackup_slash                                origin    -       -
rpool/dump                                                origin    -       -
rpool/swap                                                origin    -       -
storagepool                                               origin    -       -
storagepool/nas-lyricpool                                 origin    -       -
storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30   origin    -       -
storagepool/nas

Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-03 Thread Cindy Swearingen

Hi Cassandra,

The mirror mount feature allows the client to access files and dirs that 
are newly created on the server, but this doesn't look like your problem
described below.

My guess is that you need to resolve the username/permission issues
before this will work, but some versions of Linux don't support
traversing nested mount points.

I'm no NFS expert and many on this list are, but things to check are:

- I'll assume that hostnames are resolving between systems since
you can share/mount the resources.

- If you are seeing "nobody" instead of user names, then you need to
make sure the domain name is specified in NFSMAPID_DOMAIN. For example,
add company.com to the /etc/default/nfs file and then restart this
server:
# svcs | grep mapid
online May_27   svc:/network/nfs/mapid:default
# svcadm restart svc:/network/nfs/mapid:default

- Permissions won't resolve correctly until the above two issues are
cleared.

- You might be able to rule out the Linux client support of nested
mount points by just sharing a simple test dataset, like this:

# zfs create mypool/test
# cp /usr/dict/words /mypool/test/file.1
# zfs set sharenfs=on mypool/test

and see if file.1 is visible on the Linux client.

Thanks,

Cindy

On 06/03/10 11:53, Cassandra Pugh wrote:

Thanks for getting back to me!

I am using Solaris 10 10/09 (update 8)

I have created multiple nested zfs directories in order to compress some 
but not all sub directories in a directory.
I have ensured that they all have a sharenfs option, as I have done with 
other shares.


This is a special case to me, since instead of just
#zfs create pool/mydir

and then just using mkdir to make everything thereafter, I have done:
 #zfs create mypool/mydir/
 #zfs create mypool/mydir/dir1
 #zfs create mypool/mydir/dir1/compressed1
#zfs create mypool/mydir/dir1/compressedir2
#zfs create mypool/mydir/dir1/uncompressedir


I had hoped that I would then export this, mount it on the client, 
and see:

#ls  /mnt/mydir/*

dir:
compressedir1 compressedir2 uncompressedir

and the files thereafter.

However, what I see is:

#ls /mnt/mydir/*

dir:

My client is linux. I would assume we are using nfs v3. 
I also notice that the permissions are not showing through correctly.

The mount options used are our "defaults" (hard,rw,nosuid,nodev,intr,noacl)


I am not sure what this mirror mounting is?  Would that help me?
Is there something else I could be doing to approach this better?

Thank you for your insight.

-

Cassandra
Unix Administrator


On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen 
<cindy.swearin...@oracle.com> wrote:


Cassandra,

Which Solaris release is this?

This is working for me between an Solaris 10 server and a
OpenSolaris client.

Nested mount points can be tricky and I'm not sure if you are looking
for the mirror mount feature that is not available in the Solaris 10
release, where new directory contents are accessible on the client.

See the examples below.


Thanks,

Cindy

On the server:

# zpool create pool c1t3d0
# zfs create pool/myfs1
# cp /usr/dict/words /pool/myfs1/file.1
# zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
# ls /pool/myfs1
file.1  myfs2
# cp /usr/dict/words /pool/myfs1/myfs2/file.2
# ls /pool/myfs1/myfs2/
file.2
# zfs set sharenfs=on pool/myfs1
# zfs set sharenfs=on pool/myfs2
# share
-   /pool/myfs1   rw   ""
-   /pool/myfs1/myfs2   rw   ""

On the client:

# ls /net/t2k-brm-03/pool/myfs1
file.1  myfs2
# ls /net/t2k-brm-03/pool/myfs1/myfs2
file.2
# mount -F nfs t2k-brm-03:/pool/myfs1 /mnt
# ls /mnt
file.1  myfs2
# ls /mnt/myfs2
file.2

On the server:

# touch /pool/myfs1/myfs2/file.3

On the client:

# ls /mnt/myfs2
file.2  file.3


On 05/27/10 14:02, Cassandra Pugh wrote:

I was wondering if there is a special option to share out a set of nested
directories?  Currently if I share out a directory with /pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.
mydir1 and mydir2 are each a zfs filesystem, each shared with the proper
sharenfs permissions.
Did I miss a browse or traverse option somewhere?
-
Cassandra
Unix Administrator
"From a little spark may burst a mighty flame."
-Dante Alighieri





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org <mailto:zfs-discuss@opensolaris.org>
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discus

Re: [zfs-discuss] nfs share of nested zfs directories?

2010-06-03 Thread Cindy Swearingen



If your other single ZFS shares are working, then my guess is that the
Linux client version doesn't support the nested access feature.

You could also test the nested access between your Solaris 10 10/09
server and a Solaris 10 10/09 client, if possible, to be sure this is a
Linux client issue, and not a different configuration problem.

Cindy

On 06/03/10 13:50, Cassandra Pugh wrote:
No, usernames are not an issue.  I have many shares that work, but they 
are single ZFS file systems. 
The special case here is that I am trying to traverse NESTED ZFS file 
systems, for the purpose of having compressed and uncompressed 
directories. 



-
Cassandra
(609) 243-2413
Unix Administrator


"From a little spark may burst a mighty flame."
-Dante Alighieri


On Thu, Jun 3, 2010 at 3:00 PM, Cindy Swearingen 
<cindy.swearin...@oracle.com> wrote:


Hi Cassandra,

The mirror mount feature allows the client to access files and dirs
that are newly created on the server, but this doesn't look like
your problem
described below.

My guess is that you need to resolve the username/permission issues
before this will work, but some versions of Linux don't support
traversing nested mount points.

I'm no NFS expert and many on this list are, but things to check are:

- I'll assume that hostnames are resolving between systems since
you can share/mount the resources.

- If you are seeing "nobody" instead of user names, then you need to
make sure the domain name is specified in NFSMAPID_DOMAIN. For example,
add company.com to the /etc/default/nfs file
and then restart this
server:
# svcs | grep mapid
online May_27   svc:/network/nfs/mapid:default
# svcadm restart svc:/network/nfs/mapid:default

- Permissions won't resolve correctly until the above two issues are
cleared.

- You might be able to rule out the Linux client support of nested
mount points by just sharing a simple test dataset, like this:

# zfs create mypool/test
# cp /usr/dict/words /mypool/test/file.1
# zfs set sharenfs=on mypool/test

and see if file.1 is visible on the Linux client.

Thanks,

Cindy


On 06/03/10 11:53, Cassandra Pugh wrote:

Thanks for getting back to me!

I am using Solaris 10 10/09 (update 8)

I have created multiple nested zfs directories in order to
compress some but not all sub directories in a directory.
I have ensured that they all have a sharenfs option, as I have
done with other shares.

This is a special case to me, since instead of just
#zfs create pool/mydir

and then just using mkdir to make everything thereafter, I have
done:
 #zfs create mypool/mydir/
 #zfs create mypool/mydir/dir1
 #zfs create mypool/mydir/dir1/compressed1
#zfs create mypool/mydir/dir1/compressedir2
#zfs create mypool/mydir/dir1/uncompressedir


i had hoped that i would then export this, and mount it on the
client and see:
#ls  /mnt/mydir/*

dir:
compressedir1 compressedir2 uncompressedir

and the files thereafter.

however  what i see is :

#ls /mnt/mydir/*

dir:

My client is linux. I would assume we are using nfs v3. I also
notice that the permissions are not showing through correctly.
The mount options used are our "defaults"
(hard,rw,nosuid,nodev,intr,noacl)


I am not sure what this mirror mounting is?  Would that help me?
Is there something else I could be doing to approach this better?

Thank you for your insight.

-

Cassandra
Unix Administrator


On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:

   Cassandra,

   Which Solaris release is this?

   This is working for me between an Solaris 10 server and a
   OpenSolaris client.

   Nested mount points can be tricky and I'm not sure if you are
looking
   for the mirror mount feature that is not available in the
Solaris 10
   release, where new directory contents are accessible on the
client.

   See the examples below.


   Thanks,

   Cindy

   On the server:

   # zpool create pool c1t3d0
   # zfs create pool/myfs1
   # cp /usr/dict/words /pool/myfs1/file.1
   # zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
   # ls /pool/myfs1
   file.1  myfs2
   # cp /usr/dict/words /pool/myfs1/myfs2/file.2
   # ls /pool/myfs1/myfs2/
   file.2
  

Re: [zfs-discuss] Migrating to ZFS

2010-06-04 Thread Cindy Swearingen

Frank,

The format utility is not technically correct because it refers to
slices as partitions. Check the output below.

We might describe that the "partition" menu is used to partition the
disk into slices, but all of format refers to partitions, not slices.

I agree with Brandon's explanation, but no amount of explanation
resolves the confusion for those unfamiliar with how we use the
same term to describe different disk components.

Cindy

format> p


PARTITION MENU:
0  - change `0' partition
1  - change `1' partition
2  - change `2' partition
3  - change `3' partition
4  - change `4' partition
5  - change `5' partition
6  - change `6' partition
expand - expand label to use whole disk
select - select a predefined table
modify - modify a predefined partition table
name   - name the current table
print  - display the current table
label  - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk sectors available: 286722878 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256    136.72GB         286722911
  1 unassigned    wm                  0           0                 0
  2 unassigned    wm                  0           0                 0
  3 unassigned    wm                  0           0                 0
  4 unassigned    wm                  0           0                 0
  5 unassigned    wm                  0           0                 0
  6 unassigned    wm                  0           0                 0
  8   reserved    wm          286722912      8.00MB         286739295

partition>




On 06/04/10 15:43, Frank Cusack wrote:

On 6/4/10 11:46 AM -0700 Brandon High wrote:

Be aware that Solaris on x86 has two types of partitions. There are
fdisk partitions (c0t0d0p1, etc) which is what gparted, windows and
other tools will see. There are also Solaris partitions or slices
(c0t0d0s0). You can create or edit these with the 'format' command in
Solaris. These are created in an fdisk partition that is the SOLARIS2
type. So yeah, it's a partition table inside a partition table.


That's not correct, at least not technically.  Solaris *slices* within
the Solaris fdisk partition, are not also known as partitions.  They
are simply known as slices.  By calling them "Solaris partitions or
slices" you are just adding confusion.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Component Naming Requirements

2010-06-07 Thread Cindy Swearingen

Hi--

Pool names must contain alphanumeric characters as described here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/common/zfs/zfs_namecheck.c

The problem you are running into is probably with special characters,
such as umlauts or accents. Pool names only allow 4 special
characters, as described in the section you quoted.

Thanks,

Cindy

On 06/07/10 06:45, eXeC001er wrote:

Hi All!

Can I create a pool or dataset with a name that contains non-Latin letters 
(Russian letters, German-specific letters, etc.)?


I tried to create a pool with non-Latin letters, but could not.

In ZFS User Guide i see next information:

Each ZFS component must be named according to the following rules:

* Empty components are not allowed.

* Each component can only contain alphanumeric characters in
  addition to the following four special characters:
  o Underscore (_)
  o Hyphen (-)
  o Colon (:)
  o Period (.)

* Pool names must begin with a letter, except for the following
  restrictions:
  o The beginning sequence c[0-9] is not allowed
  o The name log is reserved
  o A name that begins with mirror, raidz, or spare is not
    allowed because these names are reserved.
  In addition, pool names must not contain a percent sign (%)

* Dataset names must begin with an alphanumeric character.
  Dataset names must not contain a percent sign (%).


As you can see, the guide has no information about requiring only Latin letters. 


Thanks.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Cindy Swearingen

Hi Toyama,

You cannot restore an individual file from a snapshot stream like
the ufsrestore command. If you have snapshots stored on your
system, you might be able to access them from the .zfs/snapshot
directory. See below.

Thanks,

Cindy

% rm reallyimportantfile
% cd .zfs/snapshot
% cd recent-snap-dir
% cp reallyimportantfile $HOME


On 06/07/10 08:34, Toyama Shunji wrote:

Can I extract one or more specific files from a zfs snapshot stream,
without restoring the full file system,
like the ufs-based 'restore' tool?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive showing as "removed"

2010-06-08 Thread Cindy Swearingen

Hi Joe,

The REMOVED status generally means that a device was physically removed
from the system.

If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.

If the device is physically connected, see what cfgadm says about this
device. For example, a device that was unconfigured from the system
would look like  this:

# cfgadm -al | grep c4t2d0
c4::dsk/c4t2d0                 disk         connected    unconfigured   unknown

(Finding the right cfgadm format for your h/w is another challenge.)
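
If it does show up as unconfigured, reconnecting it usually looks something
like the following; the attachment point name below is only a guess at the
right format for your hardware:

# cfgadm -c configure c0::dsk/c0t7d0
# zpool online nm c0t7d0
# zpool status nm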

I'm very cautious about other people's data so consider this issue:

If possible, you might import the pool while you are physically
inspecting the device or changing it physically. Depending on your
hardware, I've heard of device paths changing if another device is
reseated or changes.

Thanks,

Cindy

On 06/07/10 17:50, besson3c wrote:

Hello,

I have a drive that was a part of the pool showing up as "removed". I made no 
changes to the machine, and there are no errors being displayed, which is rather weird:

# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
nm  DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  REMOVED  0 0 0


What would your advice be here? What do you think happened, and what is the smartest way to bring this disk back up? Since there are no errors I'm inclined to throw it back into the pool and see what happens rather than trying to replace it straight away. 


Thoughts?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive showing as "removed"

2010-06-08 Thread Cindy Swearingen

Joe,

Yes, the device should resilver when it's back online.

You can use the fmdump -eV command to discover when this device was
removed, along with other hardware-related events around that time.

I would recommend exporting (not importing) the pool before physically
changing the hardware. After the device is back online and the pool is
imported, you might need to use zpool clear to clear the pool status.
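
So the rough order of operations would be (a sketch only; adjust for your
hardware):

# zpool export nm
  (reseat or reconnect the c0t7d0 drive)
# zpool import nm
# zpool clear nm
# zpool status nm        (watch the resilver complete)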

Thanks,

Cindy

On 06/08/10 11:11, Joe Auty wrote:

Cindy Swearingen wrote:

Hi Joe,

The REMOVED status generally means that a device was physically removed
from the system.

If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.

If the device is physically connected, see what cfgadm says about this
device. For example, a device that was unconfigured from the system
would look like  this:

# cfgadm -al | grep c4t2d0
c4::dsk/c4t2d0  disk connectedunconfigured   unknown

(Finding the right cfgadm format for your h/w is another challenge.)

I'm very cautious about other people's data so consider this issue:

If possible, you might import the pool while you are physically
inspecting the device or changing it physically. Depending on your
hardware, I've heard of device paths changing if another device is
reseated or changes.



Thanks Cindy!

Here is what cfgadm is showing me:

# cfgadm -al | grep c0t7d0
c0::dsk/c0t7d0                 disk         connected    configured   unknown



I'll definitely start with a reseating of the drive. I'm assuming that 
once Solaris thinks the drive is no longer removed it will start 
leveling on its own?




Thanks,

Cindy

On 06/07/10 17:50, besson3c wrote:

Hello,

I have a drive that was a part of the pool showing up as "removed". I 
made no changes to the machine, and there are no errors being 
displayed, which is rather weird:


# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
nm  DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  REMOVED  0 0 0


What would your advice be here? What do you think happened, and what 
is the smartest way to bring this disk back up? Since there are no 
errors I'm inclined to throw it back into the pool and see what 
happens rather than trying to replace it straight away.
Thoughts? 



--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, 
professional, custom designed, career-essential websites that are easy 
to maintain and to integrate with popular social networks.

www.netmusician.org
j...@netmusician.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive showing as "removed"

2010-06-08 Thread Cindy Swearingen

According to this report, I/O to this device caused a probe failure
on May 31 because the device wasn't available.

I was curious if this device had any previous issues over a longer
period of time.

Failing or faulted drives can also kill your pool's performance.

Thanks,

Cindy

On 06/08/10 11:39, Joe Auty wrote:

Cindy Swearingen wrote:

Joe,

Yes, the device should resilver when its back online.

You can use the fmdump -eV command to discover when this device was
removed and other hardware-related events to help determine when this
device was removed.

I would recommend exporting (not importing) the pool before physically
changing the hardware. After the device is back online and the pool is
imported, you might need to use zpool clear to clear the pool status.



Here is the output of that command, does this reveal anything useful? 
c0t7d0 is the drive that is marked as removed... I'll look into the 
import and export functions to learn more about them. Thanks!



# fmdump -eV
TIME   CLASS
May 31 2010 05:33:36.363381880 ereport.fs.zfs.probe_failure
nvlist version: 0
class = ereport.fs.zfs.probe_failure
ena = 0x5d2206865ac00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x28ebd14a56dfe4df
vdev = 0xdbdc49ecb5479c40
(end detector)

pool = nm
pool_guid = 0x28ebd14a56dfe4df
pool_context = 0
pool_failmode = wait
vdev_guid = 0xdbdc49ecb5479c40
vdev_type = disk
vdev_path = /dev/dsk/c0t7d0s0
vdev_devid = id1,sd@n5000c5001e7cf7a7/a
parent_guid = 0x16cbb2c1f07c5f51
parent_type = raidz
prev_state = 0x0
__ttl = 0x1
__tod = 0x4c038270 0x15a8c478





Thanks,

Cindy

On 06/08/10 11:11, Joe Auty wrote:

Cindy Swearingen wrote:

Hi Joe,

The REMOVED status generally means that a device was physically removed
from the system.

If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.

If the device is physically connected, see what cfgadm says about this
device. For example, a device that was unconfigured from the system
would look like  this:

# cfgadm -al | grep c4t2d0
c4::dsk/c4t2d0   disk   connected   unconfigured   unknown


(Finding the right cfgadm format for your h/w is another challenge.)

I'm very cautious about other people's data so consider this issue:

If possible, you might import the pool while you are physically
inspecting the device or changing it physically. Depending on your
hardware, I've heard of device paths changing if another device is
reseated or changes. 


Thanks Cindy!

Here is what cfgadm is showing me:

# cfgadm -al | grep c0t7d0
c0::dsk/c0t7d0   disk   connected   configured   unknown



I'll definitely start with a reseating of the drive. I'm assuming 
that once Solaris thinks the drive is no longer removed it will start 
leveling on its own?




Thanks,

Cindy

On 06/07/10 17:50, besson3c wrote:

Hello,

I have a drive that was a part of the pool showing up as "removed". 
I made no changes to the machine, and there are no errors being 
displayed, which is rather weird:


# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
nm  DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  REMOVED  0 0 0


What would your advice be here? What do you think happened, and 
what is the smartest way to bring this disk back up? Since there 
are no errors I'm inclined to throw it back into the pool and see 
what happens rather than trying to replace it straight away.
Thoughts? 



--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, 
professional, custom designed, career-essential websites that are 
easy to maintain and to integrate with popular social networks.

www.netmusician.org <http://www.netmusician.org>
j...@netmusician.org <mailto:j...@netmusician.org> 



--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, 
professional, custom designed, career-essential websites that are easy 
to maintain and to integrate with popular social networks.

www.netmusician.org <http://www.netmusician.org>
j...@netmusician.org <mailto:j...@netmusician.org>


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive showing as "removed"

2010-06-09 Thread Cindy Swearingen

Hi Joe,

I have no clue why this drive was removed, particularly for a one time
failure. I would reconnect/reseat this disk and see if the system
recognizes it. If it resilvers, then you're back in business, but I
would use zpool status and fmdump to monitor this pool and its devices
more often.

A current Solaris system also has the ability to retire a device that
is faulty. You can check this process with fmadm faulty. But I don't
think a one-time device failure (May 31) would remove this disk from
service. I'm no device removal expert so maybe someone else will
comment.
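
A rough monitoring sequence, using the pool and device names from this
thread, might look like this (just a sketch, not a fixed procedure):

# zpool status -x nm            # report only if the pool is unhealthy
# fmadm faulty                  # list any resources FMA has faulted or retired
# fmdump -eV | grep c0t7d0      # check for new error events on that disk
# zpool clear nm                # clear the error counters once it resilvers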

Thanks,

Cindy

On 06/08/10 23:56, Joe Auty wrote:

Cindy Swearingen wrote:

According to this report, I/O to this device caused a probe failure
because the device isn't available on May 31.

I was curious if this device had any previous issues over a longer
period of time.

Failing or faulted drives can also kill your pool's performance.

Any idea what happened here? Some weird one time fluky thing? Something 
I ought to be concerned with?



Thanks,

Cindy

On 06/08/10 11:39, Joe Auty wrote:

Cindy Swearingen wrote:

Joe,

Yes, the device should resilver when its back online.

You can use the fmdump -eV command to discover when this device was
removed and other hardware-related events to help determine when this
device was removed.

I would recommend exporting (not importing) the pool before physically
changing the hardware. After the device is back online and the pool is
imported, you might need to use zpool clear to clear the pool status. 


Here is the output of that command, does this reveal anything useful? 
c0t7d0 is the drive that is marked as removed... I'll look into the 
import and export functions to learn more about them. Thanks!



# fmdump -eV
TIME   CLASS
May 31 2010 05:33:36.363381880 ereport.fs.zfs.probe_failure
nvlist version: 0
class = ereport.fs.zfs.probe_failure
ena = 0x5d2206865ac00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x28ebd14a56dfe4df
vdev = 0xdbdc49ecb5479c40
(end detector)

pool = nm
pool_guid = 0x28ebd14a56dfe4df
pool_context = 0
pool_failmode = wait
vdev_guid = 0xdbdc49ecb5479c40
vdev_type = disk
vdev_path = /dev/dsk/c0t7d0s0
vdev_devid = id1,sd@n5000c5001e7cf7a7/a
parent_guid = 0x16cbb2c1f07c5f51
parent_type = raidz
prev_state = 0x0
__ttl = 0x1
__tod = 0x4c038270 0x15a8c478 





Thanks,

Cindy

On 06/08/10 11:11, Joe Auty wrote:

Cindy Swearingen wrote:

Hi Joe,

The REMOVED status generally means that a device was physically 
removed

from the system.

If necessary, physically reconnect c0t7d0 or if connected, check
cabling, power, and so on.

If the device is physically connected, see what cfgadm says about 
this

device. For example, a device that was unconfigured from the system
would look like  this:

# cfgadm -al | grep c4t2d0
c4::dsk/c4t2d0   disk   connected   unconfigured   unknown


(Finding the right cfgadm format for your h/w is another challenge.)

I'm very cautious about other people's data so consider this issue:

If possible, you might import the pool while you are physically
inspecting the device or changing it physically. Depending on your
hardware, I've heard of device paths changing if another device is
reseated or changes. 


Thanks Cindy!

Here is what cfgadm is showing me:

# cfgadm -al | grep c0t7d0
c0::dsk/c0t7d0   disk   connected   configured   unknown



I'll definitely start with a reseating of the drive. I'm assuming 
that once Solaris thinks the drive is no longer removed it will 
start leveling on its own?




Thanks,

Cindy

On 06/07/10 17:50, besson3c wrote:

Hello,

I have a drive that was a part of the pool showing up as 
"removed". I made no changes to the machine, and there are no 
errors being displayed, which is rather weird:


# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
nm  DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  REMOVED  0 0 0


What would your advice be here? What do you think happened, and 
what is the smartest way to bring this disk back up? Since there 
are no errors I'm inclined to throw it back into the pool and see 
what happens rather than trying to replace it straight away.
Thoughts? 



--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, 
p

Re: [zfs-discuss] Add remote disk to zfs pool

2010-06-09 Thread Cindy Swearingen

Hi Alvin,

Which Solaris release is this?

If you are using a OpenSolaris release (build 131), you might consider
the zpool split feature that allows you to clone a mirrored pool by
attaching the HDD to the pool, letting it resilver, and using zpool
split to clone the pool. Then, move the HDD and import the pool on
another system.

You can read about this feature here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs

ZFS Admin Guide, page 89

You might also consider using snapshots to replicate the pool's
contents. This process is covered in the same doc.
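
As a very rough sketch of the zpool split approach (the pool and disk names
below are hypothetical placeholders):

# zpool attach tank c2t2d0 c2t3d0    # attach the HDD to the existing mirror
# zpool status tank                  # wait until the resilver completes
# zpool split tank tank2             # split the new disk off as pool tank2
(physically move the HDD to server B)
# zpool import tank2                 # import the cloned pool on server B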

Thanks,

Cindy

On 06/09/10 02:53, Alvin Lobo wrote:

Is there a way that i can add one HDD from server A and one HDD from server B 
to a zfs pool so that there is an online snapshot taken at regular intervals. 
hence maintaining a copy on both HDD's.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import issue after a crash

2010-06-11 Thread Cindy Swearingen

Hi Tom,

Did you boot from the OpenSolaris LiveCD and attempt to manually
mount the data3 pool? The import might take some time.

I'm also curious whether the device info is coherent after the
power failure. You might review the device info for the root
pool to confirm.

If the device info is okay, you might consider adding more memory
to get data3 imported. This has helped others in the past.
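
From the LiveCD environment, a minimal sketch would be:

# zpool import                   # list pools that are visible for import
# zpool import -f -R /a data3    # attempt the import under an alternate root

and then let it run; a large or busy pool can take a long time to import.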

Thanks,

Cindy
On 06/11/10 10:27, Tom Buskey wrote:

My power supply failed.  After I replaced it, I had issues staying up after 
doing zpool import -f.
I reinstalled OpenSolaris 134 on my rpool and still had issues.

I have 5 pools:
rpool - 1*37GB
data - RAIDZ, 4*500GB
data1 - RAID1 2*750GB
data2 - RAID1 2*750GB
data3 - RAID1 2*2TB - WD20EARS

The system locks up everytime I try to import data3.
I even tried exporting all except rpool to reduce the RAM usage.
I have 3 GB RAM with a max of 4GB possible.

I'd like to read the data off data3.  From what I'm reading, the WD EARS are 
probably not the right drives to be using.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import issue after a crash

2010-06-11 Thread Cindy Swearingen

Tom,

If you freshly installed the root pool, then those devices
should be okay so that wasn't a good test. The other pools
should remain unaffected by the install, and I hope, from
the power failure.

We've seen device info get messed up during a power failure,
which is why I asked.

If you don't have dedup enabled on data3, then the memory
should be okay, but increasing memory has helped others in
the past. Its just a suggestion.

Thanks,

Cindy
On 06/11/10 14:44, Tom Buskey wrote:

Hi Tom,

Did you boot from the OpenSolaris LiveCD and attempt to manually
mount the data3 pool? The import might take some time.


I haven't tried that.  I am booting from a new install to the hard drive though.


I'm also curious whether the device info is coherent after the
power failure. You might review the device info for the root
pool to confirm.



Wouldn't that be ok with a fresh install?


If the device info is okay, you might consider adding more memory
to get data3 imported. This has helped others in the past.



I've thought of that.  I think the motherboard can only go to 4GB though.

That's why I exported the other zpools - to free up RAM.

The "rule" is 1GB/TB right?  I have about 4.5 TB with 3 GB RAM so I'm a bit 
over that rule.




Thanks,

Cindy
On 06/11/10 10:27, Tom Buskey wrote:

My power supply failed.  After I replaced it, I had issues staying up
after doing zpool import -f.
I reinstalled OpenSolaris 134 on my rpool and still had issues.

I have 5 pools:
rpool - 1*37GB
data - RAIDZ, 4*500GB
data1 - RAID1 2*750GB
data2 - RAID1 2*750GB
data3 - RAID1 2*2TB - WD20EARS

The system locks up everytime I try to import data3.
I even tried exporting all except rpool to reduce the RAM usage.
I have 3 GB RAM with a max of 4GB possible.

I'd like to read the data off data3.  From what I'm reading, the WD EARS
are probably not the right drives to be using.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to Install 2009.06 on BigAdmin Approved MOBO - FILE SYSTEM FULL

2010-06-14 Thread Cindy Swearingen

Hi Giovanni,

My Monday morning guess is that the disk/partition/slices are not
optimal for the installation.

Can you provide the partition table on the disk that you are attempting 
to install? Use format-->disk-->partition-->print.


You want to put all the disk space in c*t*d*s0. See this section of the
ZFS troubleshooting guide for an example of fixing the disk/partition/slice
issues:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk
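
If capturing the interactive format output is awkward, prtvtoc prints the
same slice table non-interactively (substitute your install disk for the
hypothetical c8t0d0 below):

# prtvtoc /dev/rdsk/c8t0d0s2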

Thanks,

Cindy


On 06/13/10 17:42, Giovanni wrote:

Hi Guys

I am having trouble installing Opensolaris 2009.06 into my Biostar Tpower I45 motherboard, approved on BigAdmin HCL here: 


http://www.sun.com/bigadmin/hcl/data/systems/details/26409.html -- why is it 
not working?

My setup:
3x 1TB hard-drives SATA 
1x 500GB hard-drive (I have only left this hdd connected to try to isolate the issue, still happens)

4GB DDR2 PC2-6400 Ram (tested GOOD!)
ATI Radeon 4650 512MB DDR2 PCI-E 16x
Motherboard default settings/CMOS cleared

Here's what happens: OpenSolaris boot options come up, I choose the first default
"OpenSolaris 2009.06" -- I HAVE ALSO TRIED VESA DRIVERS and Command line, all of
these fail.
-

After Select desktop language, 

configuring devices. 
Mounting cdroms 
Reading ZFS Config: done.


opensolaris console login: (cd rom is still being accessed at this time).. few 
seconds later:

then opensolaris ufs: NOTICE: alloc: /: file system full
opensolaris last message repeated 1 time
opensolaris syslogd: /var/adm/messages: No space left on device
opensolaris in.routed[537]: route 0.0.0.0/8 -> 0.0.0.0 nexthop is not directly 
connected

---

I logged in as jack / jack on the console and did a df -h

/devices/ramdisk:a = size 164M 100% used mount /
swap 3.3GB used 860K 1%
/mnt/misc/opt 210MB used 210M 100% /mnt/misc

/usr/lib/libc/libc_hwcap1.so.1 2.3G used 2.3G 100% /lib/libc.so.1

/dev/dsk/c7t0d0s2 677M used 677M 100% /media/OpenSolaris

Thanks for any help!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Data Loss on system crash/upgrade

2010-06-14 Thread Cindy Swearingen
> Hello all,
> 
> I've been running OpenSolaris on my personal
> fileserver for about a year and a half, and it's been
> rock solid except for having to upgrade from 2009.06
> to a dev version to fix some network driver issues.
> About a month ago, the motherboard on this computer
> died, and I upgraded to a better motherboard and
> processor.  This move broke the OS install, and
> instead of bothering to try to figure out how to fix
> it, I decided on a reinstall.  All my important data
> (including all my virtual hard drives) are stored on
>  a separate 3 disk raidz pool.  
> 
> In attempting to import the pool, I realized that I
> had upgraded the zpool to a newer version than is
> supported in the live CD, so I installed the latest
> dev release to allow the filesystem to mount.  After
> mounting the drives (with a zpool import -f), I
> noticed that some files might be missing.  After
> installing virtualbox and booting up a WinXP VM, this
> issue was confirmed.
> 
> Files before 2/10/2010 seem to be unharmed, but the
> next file I have logged on 2/19/2010 is missing.
> Every file created after this date is also missing.
> The machine had been rebooted several times before
> the crash with no issues.  For the week or so prior
> to the machine finally dying for good, it would
> boot, last a few hours, and then crash.  These files
>  were fine during that period.
> 
> One more thing of note:  when the machine suffered
> critical hardware failure, the zpool in issue was at
> about 95% full.  When I upgraded to new hardware
> (after updating the machine), I added two mirrored
> disks to the pool to alleviate the space issue until
> I could back everything up, destroy the pool, and
> recreate it with six disks instead of three.
> 
> Is this a known bug with a fix, or am I out of luck
> with these files?
> 
> Thanks,
> Austin

Austin,

If your raidz pool  with important data was damaged in some way by the
hardware failures, then a recovery mechanism in recent builds is to discard
the last few transactions to get the pool back to a good known state.
You would have seen messages regarding this recovery. 

We might be able to see if this recovery happened if you could provide 
your zpool history output for this pool.
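
For example, assuming the raidz pool is named tank (substitute your pool
name), the long-form history shows timestamps and internal events:

# zpool history -il tank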

Generally, a few seconds of data transactions are lost, not all of the data
after a certain date.

Another issue is that VirtualBox doesn't honor cache flushes by default
so if the system is crashing with data in play, your data might not be
safely written to disk.

Thanks,

Cindy
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Data Loss on system crash/upgrade

2010-06-15 Thread Cindy Swearingen

Hi Austin,

Not much help, as it turns out.

I don't see any evidence that a recovery mechanism, where you might lose 
a few seconds of data transactions, was triggered.


It almost sounds like your file system was rolled back to a previous
snapshot because the data is lost as of a certain date. I don't see any
evidence of a rollback either.

I'm stumped at this point but maybe someone else has ideas.

Is it possible that hardware failures caused the outright removal
of all data after a certain date (?) Doesn't seem possible.

You can review how the critical hardware failures were impacting
your ZFS pools by reviewing the contents of fmdump -eV. Its a lot of
output to sort through but looking for checksum errors and other
problems. Still, ongoing checksum errors would result in data
corruption, possibly, but not total loss of data after a certain
date.

Can you recover your data from your existing snapshots?
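
Two quick checks, assuming the pool name from the zpool history you sent:

# fmdump -eV | grep -c ereport.fs.zfs.checksum   # count checksum error reports
# zfs list -r -t snapshot zarray1                # see what snapshots exist to recover from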

Cindy

On 06/14/10 21:44, Austin Rotondo wrote:

Cindy,

The log is quite long, so I've attached a text file of the command output.

The last command in the log before the system crash was:

2010-04-07.20:09:06 zpool scrub zarray1

The system crashed sometime after 4/20/10, which is the last file I have record 
of creating.
I looked through the log and didn't see anything unusual, but I'm definitely no 
expert on the subject.

Thanks for your help,
Austin




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disable ZFS ACL

2010-06-16 Thread Cindy Swearingen

Hi--

No way exists to outright disable ACLs on a ZFS file system.

The removal of the aclmode property was a recent dev build change.
The zfs.1m man page you cite is for a Solaris release that is no longer
available and will be removed soon.

What are you trying to do? You can remove specific ACLs by using
syntax similar to the example below.

Thanks,

Cindy

# /usr/bin/ls -dv test.dir
drwxr-xr-x+  3 root root   3 Jun 15 10:41 test.dir

0:user:gozer:list_directory/read_data/add_file/write_data/execute:allow
 1:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
 /append_data/read_xattr/write_xattr/execute/read_attributes
 /write_attributes/read_acl/write_acl/write_owner/synchronize:allow
 2:group@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow

3:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow

# /usr/bin/chmod A0- test.dir
# /usr/bin/ls -dv test.dir
drwxr-xr-x   3 root root   3 Jun 15 10:41 test.dir
 0:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
 /append_data/read_xattr/write_xattr/execute/read_attributes
 /write_attributes/read_acl/write_acl/write_owner/synchronize:allow
 1:group@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow

2:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
 /read_acl/synchronize:allow
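
If the goal is simply to get back to a trivial ACL in one step, removing
all ACL entries at once should also work:

# /usr/bin/chmod A- test.dir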


On 06/16/10 08:49, eXeC001er wrote:

Hi All.

Can you explain to me how to disable ACLs on ZFS?

The 'aclmode' prop does not exist in the props of a zfs dataset, but this prop is in
the zfs man page
( http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?l=en&a=view&q=zfs )


Thanks.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mount zfs boot disk on another server?

2010-06-16 Thread Cindy Swearingen

Hi Jay,

I think you mean you want to connect the disk with a potentially damaged 
ZFS BE on another system and mount the ZFS BE for possible repair
purposes.

This recovery method is complicated by the fact that changing the root
pool name can cause the original system not to boot.

Other potential options that don't involve moving disks are:

1. Boot from a local or shared CD or netinstall server in single-user
mode and import the local root pool.

2. On a Solaris 10 system, you can attempt to boot failsafe mode
and roll back the ZFS BE snapshot.

3. Extensive disaster recovery:

- Send a remote copy of the ZFS BE snapshot back to the local system
- Create a flash archive of the root pool (Solaris 10)
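
For option 1, a minimal sketch once booted single-user from media (the BE
name below is a hypothetical placeholder):

# zpool import -f -R /a rpool
# zfs mount rpool/ROOT/myBE      # mounts the BE under /a
(repair files under /a, for example reinstall the boot blocks)
# zfs umount rpool/ROOT/myBE
# zpool export rpool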

Thanks,

Cindy


On 06/16/10 11:24, Jay Seaman wrote:

I have multiple servers, all with the same configuration of mirrored zfs root pools. I've 
been asked how to take a potentially "damaged" disk from one machine and carry 
it to another machine, in the event that some hw failure prevents fixing a boot problem 
in place. So we have one half of mirror of rpool that we want to take to another system 
that already has an rpool and we want to import the second rpool. The second system has 
enough disk slots to add the third disk.

I believe the answer is to use

zpool import

to get the id number of the non-native rpool

then use 
zpool import -f <pool-id> -R /mnt newpool


but I don't really have a way to check this...

Thanks

Jay

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs iostat - which unit bit vs. byte

2010-06-17 Thread Cindy Swearingen

Hi--

ZFS commands that deal with disk space accept input and display output as
exact numeric values, or in a human-readable form with a suffix of B, K, M,
G, T, P, E, Z for bytes, kilobytes, megabytes, gigabytes, terabytes,
petabytes, exabytes, or zettabytes.
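
So in the zpool iostat output below, a write bandwidth of 28.2K means
roughly 28.2 kilobytes per second. For ongoing sampling you can add an
interval:

# zpool iostat pool1 5    # print a new line of statistics every 5 seconds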


Thanks,

Cindy


On 06/17/10 05:42, pitutek wrote:

Guys,

# zpool iostat pool1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1        822M   927G      0      0    435  28.2K


In which units is bandwidth measured?
I suppose capital K means Byte but Im not sure.
Anyway abbreviation for mega is only capital M.

Docs just say:
WRITE BANDWIDTH The bandwidth of all write operations, expressed as units per 
second.

Thanks!

/M

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool is wrong size in b134

2010-06-17 Thread Cindy Swearingen

Hi Ben,

Any other details about this pool, like how it might be different from 
the other two pools on this system, might be helpful...


I'm going to try to reproduce this problem.

We'll be in touch.

Thanks,

Cindy

On 06/17/10 07:02, Ben Miller wrote:
I upgraded a server today that has been running SXCE b111 to the 
OpenSolaris preview b134.  It has three pools and two are fine, but one 
comes up with no space available in the pool (SCSI jbod of 300GB disks). 
The zpool version is at 14.


I tried exporting the pool and re-importing and I get several errors 
like this both exporting and importing:


# zpool export pool1
WARNING: metaslab_free_dva(): bad DVA 0:645838978048
WARNING: metaslab_free_dva(): bad DVA 0:645843271168
...

I tried removing the zpool.cache file, rebooting, importing and receive 
no warnings, but still reporting the wrong avail and size.


# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1   396G      0  3.22M  /export/home
# zpool list pool1
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool1   476G   341G   135G    71%  1.00x  ONLINE  -
# zpool status pool1
  pool: pool1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
pool1ONLINE   0 0 0
  raidz2-0   ONLINE   0 0 0
c1t8d0   ONLINE   0 0 0
c1t9d0   ONLINE   0 0 0
c1t10d0  ONLINE   0 0 0
c1t11d0  ONLINE   0 0 0
c1t12d0  ONLINE   0 0 0
c1t13d0  ONLINE   0 0 0
c1t14d0  ONLINE   0 0 0

errors: No known data errors

I try exporting and again get the metaslab_free_dva() warnings.  
Imported again with no warnings, but same numbers as above.  If I try to 
remove files or truncate files I receive no free space errors.


I reverted back to b111 and here is what the pool really looks like.

# zfs list pool1
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1   396G   970G  3.22M  /export/home
# zpool list pool1
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool1   1.91T   557G  1.36T    28%  ONLINE  -
# zpool status pool1
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
pool1ONLINE   0 0 0
  raidz2 ONLINE   0 0 0
c1t8d0   ONLINE   0 0 0
c1t9d0   ONLINE   0 0 0
c1t10d0  ONLINE   0 0 0
c1t11d0  ONLINE   0 0 0
c1t12d0  ONLINE   0 0 0
c1t13d0  ONLINE   0 0 0
c1t14d0  ONLINE   0 0 0

errors: No known data errors

Also, the disks were replaced one at a time last year from 73GB to 300GB 
to increase the size of the pool.  Any idea why the pool is showing up 
as the wrong size in b134 and have anything else to try?  I don't want 
to upgrade the pool version yet and then not be able to revert back...


thanks,
Ben

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Cindy Swearingen

P.S.

User/group quotas are available in the Solaris 10 release,
starting in the Solaris 10 10/09 release:

http://docs.sun.com/app/docs/doc/819-5461/gazvb?l=en&a=view

Thanks,

Cindy

On 06/18/10 07:09, David Magda wrote:

On Fri, June 18, 2010 08:29, Sendil wrote:


I can create 400+ file system for each users,
but will this affect my system performance during the system boot up?
Is this recommanded or any alternate is available for this issue.


You can create a dataset for each user, and then set a per-dataset quota
for each one:


quota=size | none

Limits the amount of space a dataset and its descendents can
consume. This property enforces a hard limit on the amount of
space used. This includes all space consumed by descendents,
including file systems and snapshots. Setting a quota on a
descendent of a dataset that already has a quota does not
override the ancestor's quota, but rather imposes an additional
limit.


Or, on newer revisions of ZFS, you can have one big data set and put all
your users in there, and then set per-user quotas:


userquota@user=size | none

Limits the amount of space consumed by the specified
user. Similar to the refquota property, the userquota space
calculation does not include space that is used by descendent
datasets, such as snapshots and clones. User space consumption
is identified by the userspace@user property.


There's also a "groupquota". See zfs(1M) for details:

   http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

Availability of "userquota" depends on the version of (Open)Solaris that
you have; don't recall when it was introduced.

As for which one is better, that depends: per-user adds flexibility, but a
bit of overhead. Best to test things out for yourself to see if it works
in your environment.
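
For example (the pool, dataset, and user names below are hypothetical):

# zfs create -o quota=10G tank/home/sendil     # per-dataset quota
# zfs set userquota@sendil=10G tank/home       # or a per-user quota on a shared dataset
# zfs userspace tank/home                      # report per-user space consumption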

You could always split things up into groups of (say) 50. A few jobs ago,
I was in an environment where we have a /home/students1/ and
/home/students2/, along with a separate faculty/ (using Solaris and UFS).
This had more to do with IOps than anything else.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen

Hi Curtis,

You might review the ZFS best practices info to help you determine
the best pool configuration for your environment:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

If you're considering using dedup, particularly on a 24T pool, then
review the current known issues, described here:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup

Thanks,

Cindy

On 06/18/10 02:52, Curtis E. Combs Jr. wrote:

I am new to zfs, so I am still learning. I'm using zpool iostat to
measure performance. Would you say that smaller raidz2 sets would give
me more reliable and better performance? I'm willing to give it a
shot...

On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen  wrote:

On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:

Well, I've searched my brains out and I can't seem to find a reason for this.

I'm getting bad to medium performance with my new test storage device. I've got 
24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca 
raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM 
OpenSolaris upgraded to snv_134.

The zpool has 2 11-disk raidz2's and I'm getting anywhere between 1MB/sec to 
40MB/sec with zpool iostat. On average, though it's more like 5MB/sec if I 
watch while I'm actively doing some r/w. I know that I should be getting better 
performance.


How are you measuring the performance?
Do you understand raidz2 with that big amount of disks in it will give you 
really poor random write performance?

-- Pasi


I'm new to OpenSolaris, but I've been using *nix systems for a long time, so if 
there's any more information that I can provide, please let me know. Am I doing 
anything wrong with this configuration? Thanks in advance.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen

If the device driver generates or fabricates device IDs, then moving
devices around is probably okay.

I recall the Areca controllers are problematic when it comes to moving
devices under pools. Maybe someone with first-hand experience can
comment.

Consider exporting the pool first, moving the devices around, and
importing the pool.
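
In other words, something along these lines (pool name hypothetical):

# zpool export tank
(move the drives to the new controllers)
# zpool import tank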

Moving devices under pool is okay for testing but in general, I don't
recommend moving devices around under pools.

Thanks,

Cindy

On 06/18/10 14:29, artiepen wrote:

Thank you, all of you, for the super helpful responses, this is probably one of 
the most helpful forums I've been on. I've been working with ZFS on some 
SunFires for a little while now, in prod, and the testing environment with oSol 
is going really well. I love it. Nothing even comes close.

If you have time, I have one more question. We're going to try it now with 2 
12-port Arecas. When I pop the controllers in and reconnect the drives, does 
ZFS have the intelligence to adjust if I use the same hard drives? Of course, it 
doesn't matter, I can just destroy the pool and recreate. I'm just curious if 
that'd work.

Thanks again!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Many checksum errors during resilver.

2010-06-21 Thread Cindy Swearingen

Hi Justin,

This looks like an older Solaris 10 release. If so, this looks like
a zpool status display bug, where it looks like the checksum errors
are occurring on the replacement device, but they are not.

I would review the steps described in the hardware section of the ZFS
troubleshooting wiki to confirm that the new disk is working as
expected:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Then, follow steps in the Notify FMA That Device Replacement is Complete
section to reset FMA. Then, start monitoring the replacement device
with fmdump to see if any new activity occurs on this device.
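
As a sketch of that last part (the UUID below is a placeholder you would
take from the fmadm faulty output):

# fmadm faulty                 # list faulted resources and their event UUIDs
# fmadm repair <uuid>          # tell FMA the replacement is complete
# fmdump -eV | grep c1t6d0     # watch for any new events on the new disk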

Thanks,

Cindy


On 06/21/10 10:21, Justin Daniel Meyer wrote:

I've decided to upgrade my home server capacity by replacing the disks in one 
of my mirror vdevs.  The procedure appeared to work out, but during resilver, a 
couple million checksum errors were logged on the new device. I've read through 
quite a bit of the archive and searched around a bit, but can not find anything 
definitive to ease my mind on whether to proceed.


SunOS deepthought 5.10 Generic_142901-13 i86pc i386 i86pc

  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 691h28m to go
config:

NAME  STATE READ WRITE CKSUM
tank  DEGRADED 0 0 0
  mirror  DEGRADED 0 0 0
replacing DEGRADED   215 0 0
  c1t6d0s0/o  FAULTED  0 0 0  corrupted data
  c1t6d0  ONLINE   0 0   215  3.73M resilvered
c1t2d0ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t1d0ONLINE   0 0 0
c1t5d0ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0ONLINE   0 0 0
c1t4d0ONLINE   0 0 0
logs
  c8t1d0p1ONLINE   0 0 0
cache
  c2t1d0p2ONLINE   0 0 0


During the resilver, the cache device and the zil were both removed for errors 
(1-2k each).  (Despite the c2/c8 discrepancy, they are partitions on the same 
OCZvertexII device.)


# zpool status -xv tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 9h20m with 0 errors on Sat Jun 19 22:07:27 2010
config:

NAME        STATE     READ WRITE CKSUM
tankDEGRADED 0 0 0
  mirrorONLINE   0 0 0
c1t6d0  ONLINE   0 0 2.69M  539G resilvered
c1t2d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t0d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
logs
  c8t1d0p1  REMOVED  0 0 0
cache
  c2t1d0p2  REMOVED  0 0 0

I cleared the errors (about 5000/GB resilvered!), removed the cache device, and 
replaced the zil partition with the whole device.  After 3 pool scrubs with no 
errors, I want to check with someone else that it appears okay to replace the 
second drive in this mirror vdev.  The one thing I have not tried is a large 
file transfer to the server, as I am also dealing with an NFS mount problem 
which popped up suspiciously close to my most recent patch update.


# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: scrub completed after 3h26m with 0 errors on Mon Jun 21 01:45:00 2010
config:

NAME        STATE     READ WRITE CKSUM
tankONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t0d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
logs
  c0t0d0ONLINE   0 0 0

errors: No known data errors


/var/adm/messages is positively over-run with these triplets/quadruplets, not all of 
which end up as "fatal" type.


Jun 19 21:43:19 deepthought scsi: [ID 107833 kern.warning] WARNING: 
/p

Re: [zfs-discuss] c5->c9 device name change prevents beadm activate

2010-06-23 Thread Cindy Swearingen



On 06/23/10 10:40, Evan Layton wrote:

On 6/23/10 4:29 AM, Brian Nitz wrote:

I saw a problem while upgrading from build 140 to 141 where beadm
activate {build141BE} failed because installgrub failed:

# BE_PRINT_ERR=true beadm activate opensolarismigi-4
be_do_installgrub: installgrub failed for device c5t0d0s0.
Unable to activate opensolarismigi-4.
Unknown external error.

The reason installgrub failed is that it is attempting to install grub
on c5t0d0s0 which is where my root pool is:
# zpool status
pool: rpool
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub repaired 0 in 5h3m with 0 errors on Tue Jun 22 22:31:08 2010
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0

errors: No known data errors

But the raw device doesn't exist:
# ls -ls /dev/rdsk/c5*
/dev/rdsk/c5*: No such file or directory

Even though zfs pool still sees it as c5, the actual device seen by
format is c9t0d0s0


Is there any workaround for this problem? Is it a bug in install, zfs or
somewhere else in ON?



In this instance beadm is a victim of the zpool configuration reporting
the wrong device. This does appear to be a ZFS issue since the device
actually being used is not what zpool status is reporting. I'm forwarding
this on to the ZFS alias to see if anyone has any thoughts there.

-evan


Hi Evan,

I suspect that some kind of system, hardware, or firmware event changed
this device name. We could identify the original root pool device with
the zpool history output from this pool.

Brian, you could boot this system from the OpenSolaris LiveCD and
attempt to import this pool to see if that will update the device info
correctly.

If that doesn't help, then create /dev/rdsk/c5* symlinks to point to
the correct device.
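
A rough sketch of the LiveCD approach (if booted from media, you may prefer
the stage1/stage2 files from the mounted BE instead of the media's copies):

# zpool import -f -R /a rpool      # re-import so ZFS rescans the device paths
# zpool status rpool               # should now report the c9 device name
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9t0d0s0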

Thanks,

Cindy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs failsafe pool mismatch

2010-06-24 Thread Cindy Swearingen

Hi Shawn,

I think this can happen if you apply patch 141445-09.
It should not happen in the future.

I believe the workaround is this:

1. Boot the system from the correct media.

2. Install the boot blocks on the root pool disk(s).

3. Upgrade the pool.
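
On x86, steps 2 and 3 would look roughly like this (disk name hypothetical;
on SPARC, use installboot with the ZFS bootblk instead of installgrub):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# zpool upgrade rpool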

Thanks,

Cindy

On 06/24/10 09:24, Shawn Belaire wrote:

I have a customer that described this issue to me in general terms.

I'd like to know how to replicate it, and what the best practice is to avoid 
the issue, or fix it in an accepted manner.

If they apply a kernel patch and reboot, they may get messages informing them that the 
pool version is down-rev'd.  If they act on the message and upgrade the pool 
version and later have to boot from the failsafe archive, that boot fails because the 
failsafe kernel does not support the newer pool version.

What would be a way to fix this, and should we allow this catch-22 to even happen?

Thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root recovery SMI/EFI label weirdness

2010-06-25 Thread Cindy Swearingen

Sean,

If you review the doc section you included previously, you will see
that all the root pool examples include slice 0.

The slice is a long-standing boot requirement and is described in
the boot chapter, in this section:

http://docs.sun.com/app/docs/doc/819-5461/ggrko?l=en&a=view

ZFS Storage Pool Configuration Requirements

The pool must exist either on a disk slice or on disk slices that are
mirrored.

Thanks,

Cindy

On 06/25/10 05:44, Sean . wrote:

I've discovered the source of the problem.

zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0

It seems a root pool must only be created on a slice. Therefore 


zpool create -f -o failmode=continue -R /a -m legacy -o 
cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0

will work. I've been reading through some of the ZFS root installation stuff 
and can't find a note that explicitly states this, although with a bit of bing'ing 
I found a thread that confirmed this.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recomendations for Storage Pool Config

2010-06-25 Thread Cindy Swearingen

Tiernan,

Hardware redundancy is important, but I would be thinking about how you
are going to back up data in the 6-24 TB range, if you actually need
that much space.

Balance your space requirements with good redundancy and how much data
you can safely back up because stuff happens: hardware fails, power
fails, and you can lose data.

More suggestions:

1. Test some configs for your specific data/environment.

2. Start with smaller mirrored pools, which offer redundancy, good
performance, and more flexibility.

With a SAN, I would assume you are using multiple systems. Did you mean
meta centre or media centre?

3. Consider a mirrored source pool and then create snapshots that you
send to a mirrored backup pool on another system. Mirrored pools can be
easily expanded when you need more space.

4. If you are running a recent OpenSolaris build, you could use the
zpool split command to attach and detach disks from your source pool to
replicate it on another system, in addition to doing more regular
snapshots of source data.
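
As a rough illustration of suggestion 3 (the host, pool, and snapshot names
are hypothetical, and backup/tank must not already exist for the first full
receive):

# zfs snapshot -r tank@backup-1
# zfs send -R tank@backup-1 | ssh backuphost zfs receive backup/tank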

Thanks,

Cindy

On 06/25/10 13:26, Tiernan OToole wrote:

Good morning all.

 


This question has probably poped up before, but maybe not in this exact way…

 

I am planning on building a SAN for my home meta centre, and have some 
of the raid cards I need for the build. I will be ordering the case 
soon, and then the drives. The cards I have are 2 8 port PXI-Express 
cards (A dell Perc 5 and a Adaptec card…). The case will have 20 hot 
swap SAS/SATA drives, and I will be adding a third RAID controller to 
allow the full 20 drives.


 

I have read something about trying to setup redundancy with the RAID 
controllers, so having zpools spanning multiple controllers. Given I 
won’t be using the on-board RAID features of the cards, I am wondering 
how this should be setup…


 

I was thinking of zpools: 2+2+1 X 4 in ZRAID2. This way, I could lose a 
controller and not lose any data from the pools… But is this theory 
correct? If I were to use 2Tb drives, each zpool would be 10Tb RAW and 
6TB useable… giving me a total of 40Tb RAW and 24Tb usable…


 


Is this over kill? Should I be worrying about losing a controller?

 


Thanks in advance.

 


--Tiernan




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilvering onto a spare - degraded because of read and cksum errors

2010-06-28 Thread Cindy Swearingen

Hi Donald,

I think this is just a reporting error in the zpool status output,
depending on what Solaris release this is.

Thanks,

Cindy

On 06/27/10 15:13, Donald Murray, P.Eng. wrote:

Hi,

I awoke this morning to a panic'd opensolaris zfs box. I rebooted it
and confirmed it would panic each time it tried to import the 'tank'
pool. Once I disconnected half of one of the mirrored disks, the box
booted cleanly and the pool imported without a panic.

Because this box has a hot spare, it began resilvering automatically.
This is the first time I've resilvered to a hot spare, so I'm not sure
whether the output below [1]  is normal.

In particular, I think it's odd that the spare has an equal number of
read and cksum errors. Is this normal? Is my spare a piece of junk,
just like the disk it replaced?


[1]
root@weyl:~# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver in progress for 3h42m, 97.34% done, 0h6m to go
config:

NAME   STATE READ WRITE CKSUM
tank   DEGRADED 0 0 0
  mirror   DEGRADED 0 0 0
spare  DEGRADED 1.36M 0 0
  9828443264686839751  UNAVAIL  0 0 0  was
/dev/dsk/c6t1d0s0
  c7t1d0   DEGRADED 0 0 1.36M  too many errors
c9t0d0 ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c7t0d0 ONLINE   0 0 0
c5t1d0 ONLINE   0 0 0
spares
  c7t1d0   INUSE currently in use

errors: No known data errors
r...@weyl:~#
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

