Re: [zfs-discuss] ZFS mirror resilver process

2009-10-18 Thread Adam Mellor
I too have seen this problem.

I had done a zfs send from my main pool "terra" (6-disk raidz on Seagate 1TB
drives) to a mirror pair of WD Green 1TB drives.
The zfs send was successful; however, I noticed the pool was degraded after a
while (~1 week), with one of the mirror disks constantly resilvering (40 TB
resilvered on a 1TB disk); something was fishy.
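
For reference, the transfer was essentially of this shape (the snapshot and
target pool names here are illustrative, not my exact commands):

# snapshot the source pool recursively and stream it into the mirror pool
zfs snapshot -r terra@backup
zfs send -R terra@backup | zfs recv -Fd mirrorpool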

I removed the disk that kept resilvering and replaced it with another WD Green
1TB (factory new) and added it as a mirror to the pool; again it resilvered
successfully. I performed a scrub the next day (after a couple of reboots etc.)
and it started resilvering the replaced drive again.
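
The replace and the later scrub were along these lines (pool and device names
are placeholders, and I may have used detach/attach rather than replace):

zpool detach mirrorpool c1t2d0          # drop the disk that kept resilvering
zpool attach mirrorpool c1t1d0 c1t3d0   # attach the factory-new disk as its mirror
zpool status -v mirrorpool              # resilver completes
zpool scrub mirrorpool                  # next-day scrub kicked off another resilver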

I still had most of the data in the original pool, so I ran md5sum against some
of the original files (~20GB files) and the ex-mirror copies, and the md5 sums
came back the same.

I have since blown away the ex-mirror, re-created the zpool mirror, and copied
the data back.

I have not seen this occur since creating the new zpool.

I have been running the dev builds of 2010.1.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS mirror resilver process

2009-10-18 Thread Toby Thain


On 18-Oct-09, at 6:41 AM, Adam Mellor wrote:


> I too have seen this problem.
>
> I had done a zfs send from my main pool "terra" (6-disk raidz on
> Seagate 1TB drives) to a mirror pair of WD Green 1TB drives.
> The zfs send was successful; however, I noticed the pool was degraded
> after a while (~1 week), with one of the mirror disks constantly
> resilvering (40 TB resilvered on a 1TB disk); something was fishy.
>
> I removed the disk that kept resilvering and replaced it with another
> WD Green 1TB (factory new) and added it as a mirror to the pool; again
> it resilvered successfully. I performed a scrub the next day (after a
> couple of reboots etc.) and it started resilvering the replaced drive
> again.
>
> I still had most of the data in the original pool, so I ran md5sum
> against some of the original files (~20GB files) and the ex-mirror
> copies, and the md5 sums came back the same.


This doesn't test much; ZFS will use whichever side of the mirror is  
good.


--Toby



> I have since blown away the ex-mirror, re-created the zpool mirror,
> and copied the data back.

...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread Sander Smeenk
Quoting A Darren Dunham (ddun...@taos.com):

> > i noticed rsync -removes- snapshots even though i am not able to do
> > so myself, even as root, with plain /bin/rm.
> I never liked this interface.  I want snapshots to be immutable to
> operations within the filesystem itself.

Well, that's what I would expect too. It seems strange that you can't
edit or remove singular files from snapshots, though you can rmdir a
complete snapshot 'directory' at once. Is this by design? I call it a
bug. ;)

> /bin/rmdir on the other hand did do a rmdir(2), and that does destroy
> the snapshot.

It seems to me that the 'base directory' or 'mountpoint' of the
snapshot, e.g. .zfs/snapshot/, is not immutable, whereas all files and
directories inside the snapshot are:

r...@host:/backup/host/.zfs/snapshot/2009-10-14# rmdir bin
rmdir: bin: Read-only file system

> Although I see sometimes the rmdir fails.  If I've been doing
> "something" in a snapshot (sometimes some stats are enough), then it
> seems to "lock" the snapshot and the rmdir fails (with EBUSY).

Hmm, I haven't noticed such behaviour myself. I guess I'm going to switch
off snapdir visibility while rsync runs, or something.

Regards,
-Sndr.
-- 
| So this magician is walking down the street and turns into a grocery store.
| 4096R/6D40 - 1A20 B9AA 87D4 84C7  FBD6 F3A9 9442 20CC 6CD2
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The ZFS FAQ needs an update

2009-10-18 Thread Sriram Narayanan
All:

Given that the latest S10 update includes user quotas, the FAQ here
[1] may need an update.
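
For anyone who hasn't played with it yet, the per-user quota support looks
roughly like this (dataset and user names are just examples):

zfs set userquota@alice=10G tank/home
zfs get userquota@alice tank/home
zfs userspace tank/home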

-- Sriram

[1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread dick hoogendijk
On Sun, 2009-10-18 at 18:12 +0200, Sander Smeenk wrote:

> Well, that's what I would expect too. It seems strange that you can't
> edit or remove singular files from snapshots [...]

That would make the snapshot not a snapshot anymore. There would be
differences.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread Sander Smeenk
Quoting dick hoogendijk (d...@nagual.nl):

> > Well, that's what I would expect too. It seems strange that you can't
> > edit or remove singular files from snapshots [...]
> That would make the snapshot not a snapshot anymore. There would be
> differences.

I'm well aware of that. You're replying to just a part of my sentence.

I tried to indicate that it's strange that rmdir works on the snapshot
directory while files inside snapshots are immutable.

This, to me, is a bug.

-Sndr.
-- 
| Did you hear about the cat that ate a ball of wool?  --  It got mittens.
| 4096R/6D40 - 1A20 B9AA 87D4 84C7  FBD6 F3A9 9442 20CC 6CD2
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread Chris Kirby

On Oct 18, 2009, at 11:37 AM, Sander Smeenk wrote:


> I tried to indicate that it's strange that rmdir works on the snapshot
> directory while files inside snapshots are immutable.
>
> This, to me, is a bug.


If you have a snapshot named "pool@snap", this:

# rmdir /pool/.zfs/snapshot/snap

is equivalent to this:

# zfs destroy pool@snap

Similarly, this:

# mkdir /pool/.zfs/snapshot/snap

is equivalent to this:

# zfs snapshot pool@snap

This can be very handy if you want to create or destroy
a snapshot from an NFS client, for example.
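
From an NFS client that has the pool's root filesystem mounted (say at
/mnt/pool; the mount path and snapshot name here are just hypothetical
examples), that means something like this, permissions permitting:

client# mkdir /mnt/pool/.zfs/snapshot/before-upgrade   # creates pool@before-upgrade
client# rmdir /mnt/pool/.zfs/snapshot/before-upgrade   # destroys it again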

-Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread Sander Smeenk
Quoting Chris Kirby (chris.ki...@sun.com):

> If you have a snapshot named "pool@snap", this:
> # rmdir /pool/.zfs/snapshot/snap
> is equivalent to this:
> # zfs destroy pool@snap
>
> Similarly, this:
> # mkdir /pool/.zfs/snapshot/snap
> is equivalent to this:
> # zfs snapshot pool@snap
>
> This can be very handy if you want to create or destroy
> a snapshot from an NFS client, for example.

Oh, right. This is where my newbieness kicks in. I didn't know that just
doing a mkdir in the snapshot directory actually -creates- a snapshot.
Thanks for the clarification.

I think I will toggle snapdir=visible/hidden to overcome this problem,
or perhaps add the snapshot directory to rsync's --exclude list.
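
Roughly one of these, I suppose (the dataset name and the source are
placeholders, guessed from the path in my earlier mail):

# option 1: hide the snapshot directory while the backup runs
zfs set snapdir=hidden backup/host
rsync -a --delete source:/ /backup/host/
zfs set snapdir=visible backup/host

# option 2: just keep rsync away from it
rsync -a --delete --exclude='/.zfs' source:/ /backup/host/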

Thanks!
-Sndr.
-- 
| If you jog backwards, will you gain weight?
| 4096R/6D40 - 1A20 B9AA 87D4 84C7  FBD6 F3A9 9442 20CC 6CD2
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool won't get back online

2009-10-18 Thread Jonas Nordin
hi,

After a shutdown my zpool won't come back online; zpool status showed that only
one of five hard drives is online. I tried exporting the pool and importing it
again in the hope of a fix, but with no change.
I have replaced the SATA cables and even replaced the motherboard, but it
always shows the same zpool status.

# zpool import shows:

  pool: tank
    id: 6529188950165676222
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        tank        UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  insufficient replicas
            c7t1d0  UNAVAIL  cannot open
            c6t1d0  ONLINE
            c6t0d0  UNAVAIL  cannot open
            c5t1d0  UNAVAIL  cannot open
            c7t0d0  UNAVAIL  cannot open

# format shows:

c5t1d0: configured with capacity of 698.60GB
c6t0d0: configured with capacity of 698.60GB
c7t0d0: configured with capacity of 698.60GB
c7t1d0: configured with capacity of 698.60GB


AVAILABLE DISK SELECTIONS:
       0. c5t0d0 <DEFAULT cyl 30512 alt 2 hd 255 sec 63>
          /p...@0,0/pci1043,8...@5/d...@0,0
       1. c5t1d0 <ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126>
          /p...@0,0/pci1043,8...@5/d...@1,0
       2. c6t0d0 <ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126>
          /p...@0,0/pci1043,8...@5,1/d...@0,0
       3. c6t1d0 <ATA-WDC WD7500AAKS-0-4G30-698.64GB>
          /p...@0,0/pci1043,8...@5,1/d...@1,0
       4. c7t0d0 <ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126>
          /p...@0,0/pci1043,8...@5,2/d...@0,0
       5. c7t1d0 <ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126>
          /p...@0,0/pci1043,8...@5,2/d...@1,0

I find the format output a bit strange, since it lists the capacity of the four
missing zpool drives but not of the drive that is online.

Any take on how I can fix this or am I screwed?
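
Would checking the on-disk labels tell me anything useful here? I assume
something like this, using one of the devices that won't open:

zdb -l /dev/dsk/c7t1d0s0
zpool import -d /dev/dsk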

Jonas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iscsi/comstar performance

2009-10-18 Thread Frank Middleton

On 10/13/09 18:35, Albert Chin wrote:


> Maybe this will help:
> http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html


Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem: the VM only has
564M available, and it is paging a bit. Doing synchronous I/O for
swap makes no sense. Is there an official way to disable this
behavior?

Does anyone know if the old iscsi system is going to stay around,
or will COMSTAR replace it at some point? The 64K metadata block
at the start of each volume is a bit awkward, too; it seems to
throw VBox into a tizzy when it tries (and fails) to boot MSWXP.

The options seem to be

a) stay with the old method and hope it remains supported

b) figure out a way around the COMSTAR limitations

c) give up and use NFS

Using ZFS as an iscsi backing store for VirtualBox images seemed
like a great idea, so simple to maintain and robust, but COMSTAR
seems to have sand-bagged it a bit. The performance was quite
acceptable before but it is pretty much unusable this way.

Any ideas would be much appreciated.

Thanks -- Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fishworks on x4275?

2009-10-18 Thread Trevor Pretty

Frank,

I've been looking into:
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection&id=4&Itemid=128

Only played with a VM so far on my laptop, but it does seem to be an
alternative to the Sun product if you don't want to buy a S7000.

IMHO: Sun are missing a great opportunity by not offering a reasonable
upgrade path from an X-series box to an S7000.

Trevor Pretty | Technical Account Manager
T: +64 9 639 0652 | M: +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland

Frank Cusack wrote:

> Apologies if this has been covered before, I couldn't find anything
> in my searching.
>
> Can the software which runs on the 7000 series servers be installed
> on an x4275?
>
> -frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshots & rsync --delete

2009-10-18 Thread Richard Elling

On Oct 18, 2009, at 11:23 AM, Sander Smeenk wrote:


> Quoting Chris Kirby (chris.ki...@sun.com):
>
>> If you have a snapshot named "pool@snap", this:
>> # rmdir /pool/.zfs/snapshot/snap
>> is equivalent to this:
>> # zfs destroy pool@snap
>>
>> Similarly, this:
>> # mkdir /pool/.zfs/snapshot/snap
>> is equivalent to this:
>> # zfs snapshot pool@snap
>>
>> This can be very handy if you want to create or destroy
>> a snapshot from an NFS client, for example.
>
> Oh, right. This is where my newbieness kicks in. I didn't know that just
> doing a mkdir in the snapshot directory actually -creates- a snapshot.
> Thanks for the clarification.
>
> I think I will toggle snapdir=visible/hidden to overcome this problem,
> or perhaps add the snapshot directory to rsync's --exclude list.


Yes. This is why the snapshots are not visible by default... legacy
backup software (e.g. tar) would want to back them up, which would be
redundant, redundant.
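
For example, with snapdir=visible a plain full backup would happily walk
the snapshots as well (paths made up):

tar cf /backup/home-full.tar /tank/home   # also descends into /tank/home/.zfs/snapshot/*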

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6399899
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss