On 12 December, 2006 - Patrick P Korsnick sent me these 1,1K bytes:
> i have a machine with a disk that has some sort of defect and i've
> found that if i partition only half of the disk that the machine will
> still work. i tried to use 'format' to scan the disk and find the bad
> blocks, but it
I have a machine with a disk that has some sort of defect, and I've found that
if I partition only half of the disk the machine will still work. I tried
to use 'format' to scan the disk and find the bad blocks, but it didn't work.
So as I don't know where the bad blocks are but I'd still li
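For anyone else trying the same thing: the usual way I've seen format(1M) used for
this is its surface analysis menu, roughly as below (menu names from memory, so
treat it as a sketch; pick the suspect disk when format prompts for one):
  # format
  format> analyze
  analyze> read        (non-destructive read pass; bad blocks are reported as they're hit)
  analyze> quit
  format> quit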
> If the SCSI commands hang forever, then there is nothing that ZFS can
> do, as a single write will never return. The more likely case is that
> the commands are continually timing out with very long response times,
> and ZFS will continue to talk to them forever.
It looks like the sd driver d
It took manufacturers of SCSI drives some years to get this right. Around 1997
or so we were still seeing drives at my former employer that didn't properly
flush their caches under all circumstances (and had other "interesting"
behaviours WRT caching).
Lots of ATA disks never did bother to impl
> http://www.norcotek.com/item_detail.php?categoryid=8&modelno=DS-1220
Yeah, SiI3726 multipliers are cool..
http://cooldrives.com/cosapomubrso.html
http://cooldrives.com/mac-port-multiplier-sata-case.html
but finding PCI-X slots for Ying Tian's si3124 or marvell88sx
cards is getting tricky.. even
> We're looking for pure performance.
>
> What will be contained in the LUNS is Student User
> account files that they will access and Department
> Share files like, MS word documents, excel files,
> PDF. There will be no applications on the ZFS
> Storage pools or pool. Does this help on what
> s
> Also note that the UB is written to every vdev (4 per disk) so the
> chances of all UBs being corrupted is rather low.
The chances that they're corrupted by the storage system, yes.
However, they are all sourced from the same in-memory buffer, so an undetected
in-memory error (e.g. kernel bug
I think you may be observing that fsync() is slow.
The file will be written, and visible to other processes via the in-memory
cache, before the data has been pushed to disk. vi forces the data out via
fsync, and that can be quite slow when the file system is under load,
especially before a fix
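If you want to confirm that it really is fsync() you're waiting on, a quick DTrace
sketch (assuming fsync() shows up as the fdsync syscall, which it does on the
Solaris bits I've looked at):
  # dtrace -n 'syscall::fdsync:entry { self->ts = timestamp; }' \
           -n 'syscall::fdsync:return /self->ts/ {
                   @["fsync time (ns)"] = quantize(timestamp - self->ts); self->ts = 0; }'
Run it while saving from vi under load and the quantize histogram should make the
latency obvious.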
Thanks, Neil, for the assistance.
Tom
Neil Perrin wrote On 12/12/06 19:59,:
>Tom Duell wrote On 12/12/06 17:11,:
>
>
>>Group,
>>
>>We are running a benchmark with 4000 users
>>simulating a hospital management system
>>running on Solaris 10 6/06 on USIV+ based
>>SunFire 6900 with 6540 storage ar
I'm observing the following behavior on our E2900 (24 x 92 config), 2 FCs, and
... I have a large filesystem (~758GB) with compress mode on. When this
filesystem is under heavy load (>150MB/s) I have problems saving files in 'vi'. I
posted here about it and recall that the issue is addressed in Sol1
Tom Duell wrote On 12/12/06 17:11,:
Group,
We are running a benchmark with 4000 users
simulating a hospital management system
running on Solaris 10 6/06 on USIV+ based
SunFire 6900 with 6540 storage array.
Are there any tools for measuring internal
ZFS activity to help us understand what is g
> Hello Toby,
>
> Tuesday, December 12, 2006, 4:18:54 PM, you wrote:
> TT> On 12-Dec-06, at 9:46 AM, George Wilson wrote:
>
> >> Also note that the UB is written to every vdev (4 per disk) so the
> >> chances of all UBs being corrupted is rather low.
>
> It depends actually - if all your vdevs
Group,
We are running a benchmark with 4000 users
simulating a hospital management system
running on Solaris 10 6/06 on USIV+ based
SunFire 6900 with 6540 storage array.
Are there any tools for measuring internal
ZFS activity to help us understand what is going
on during slowdowns?
We have 192GB
Also there will be no NFS services on this system.
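On the question above about measuring internal ZFS activity: a few things that are
already on the system and safe to run during the benchmark (the kstat name is from
memory, so treat that one as an assumption):
  zpool iostat -v 5        # per-vdev read/write ops and bandwidth every 5 seconds
  iostat -xnz 5            # what the underlying LUNs/devices are doing
  kstat -n arcstats 5      # ARC size and hit/miss counters, if your bits have it
DTrace can go a lot deeper, but those three usually show where a slowdown is first.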
We're looking for pure performance.
What will be contained in the LUNS is Student User account files that they will
access and Department Share files like MS Word documents, Excel files, and PDF.
There will be no applications on the ZFS Storage pools or pool. Does this help
on what strategy might
Anantha N. Srirama wrote:
- Why is the destroy phase taking so long?
Destroying clones will be much faster with build 53 or later (or the
unreleased s10u4 or later) -- see bug 6484044.
- What can explain the unduly long snapshot/clone times
- Why didn't the Zone start up?
- More surprisi
> PS> While I do intend to perform actual powerloss tests, it would be
> PS> interesting to hear from anybody whether it is generally expected to be
> PS> safe.
>
> Well, if disks honor cache flush commands then it should be reliable
> whether it's a SATA or SCSI disk.
Yes. Sorry, I could have stated my
Hi Kory,
It depends on the capabilities of your array in our experience...and
also the zpool type. If you're going to do RAID-Z in a write-intensive
environment you're going to have a lot more I/Os with three LUNs than
a single large LUN. Your controller may go nutty.
Also, (Richard can address
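To make the comparison concrete, the two layouts look something like this (pool and
LUN names made up):
  # dynamic stripe across the three LUNs - most bandwidth, but no ZFS redundancy
  zpool create stupool c4t0d0 c4t1d0 c4t2d0
  # single-parity raid-z across the same LUNs - ZFS can self-heal, but every
  # full-stripe write touches all three LUNs, so the array sees more I/O
  zpool create stupool raidz c4t0d0 c4t1d0 c4t2d0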
Robert Milkowski wrote:
Hello Matthew,
MCA> Also, I am considering what type of zpools to create. I have a
MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
MCA> JBOD (at least that is what I remember) I guess I am going to
MCA> have to add in the LUNS in a mirrored zpool of
Hello Peter,
Tuesday, December 12, 2006, 11:18:32 PM, you wrote:
PS> Hello,
PS> my understanding is that ZFS is specifically designed to work with write
PS> caching, by instructing drives to flush their caches when a write barrier is
PS> needed. And in fact, even turns write caching on explicitl
Hello Anton,
Tuesday, December 12, 2006, 9:36:41 PM, you wrote:
ABR> Is there an easy way to determine whether a pool has this fix applied or
ABR> not?
Yep.
Just do 'df -h' and see what the reported size of the pool is. It should
be something like N-1 times the disk size for each raid-z group. If it is
N t
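For example, on a pool made of one 5 x 500GB raid-z group (sizes and pool name made
up), you'd expect roughly:
  df -h /mypool       # with the fix: ~2.0T (N-1 disks of usable space)
                      # without it:   ~2.5T (raw size, parity counted in)
  zpool list mypool   # always shows raw space, so ~2.5T either way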
Hello,
my understanding is that ZFS is specifically designed to work with write
caching, by instructing drives to flush their caches when a write barrier is
needed. In fact, it even turns write caching on explicitly on managed
devices.
My question is of a practical nature: will this *actually
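One practical check I know of, though the expert-mode menus may differ by release
and driver, is to look at the drive's write cache setting with format:
  # format -e          (select the disk in question)
  format> cache
  cache> write_cache
  write_cache> display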
> NetApp can actually grow their RAID groups, but they recommend adding
> an entire RAID group at once instead. If you add a disk to a RAID
> group on NetApp, I believe you need to manually start a reallocate
> process to balance data across the disks.
There's no reallocation process that I'm awar
Are you looking purely for performance, or for the added reliability that ZFS
can give you?
If the latter, then you would want to configure across multiple LUNs in either
a mirrored or RAID configuration. This does require sacrificing some storage in
exchange for the peace of mind that any “si
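In zpool terms that's something along the lines of (LUN names hypothetical):
  # two array LUNs mirrored: half the raw space, but ZFS can repair silent
  # corruption on one side from the copy on the other
  zpool create tank mirror c6t0d0 c6t1d0
  # or single-parity raid-z across three LUNs: N-1 of the space is usable
  zpool create tank raidz c6t0d0 c6t1d0 c6t2d0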
Is there an easy way to determine whether a pool has this fix applied or not?
Eric Schrock wrote:
> Hmmm, it means that we correctly noticed that the device had failed, but
> for whatever reason the ZFS FMA agent didn't correctly replace the
> drive. I am cleaning up the hot spare behavior as we speak so I will
> try to reproduce this.
Ok, great.
>> Well, as long as I kn
On Tue, Dec 12, 2006 at 02:38:22PM -0500, James F. Hranicky wrote:
>
> Dec 11 14:42:32.1271 1319464e-7a8c-e65b-962e-db386e90f7f2 ZFS-8000-D3
> 100% fault.fs.zfs.device
>
> Problem in: zfs://pool=2646e20c1cb0a9d0/vdev=724c128cdbc17745
>Affects: zfs://pool=2646e20c1cb0a9d0/vd
Kory Wheatley wrote:
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array.
Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNS. Now how would ZFS be used for the
best performance?
What I'm trying to ask is if you have 3 LUNS a
Matthew C Aycock wrote:
We are currently working on a plan to upgrade our HA-NFS cluster that
uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is
there a known procedure or best practice for this? I have enough free disk
space to recreate all the filesystems and copy the dat
Eric Schrock wrote:
> On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote:
>> Sure, but that's what I want to avoid. The FMA agent should do this by
>> itself, but it's not, so I guess I'm just wondering why, or if there's
>> a good way to get to do so. If this happens in the middle o
On 12/12/06, James F. Hranicky <[EMAIL PROTECTED]> wrote:
Jim Davis wrote:
>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard au
On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote:
>
> Sure, but that's what I want to avoid. The FMA agent should do this by
> itself, but it's not, so I guess I'm just wondering why, or if there's
> a good way to get to do so. If this happens in the middle of the night I
> don't
> IIRC you have to re-create entire raid-z pool to get
> it fixed - just
> rewriting data or upgrading a pool won't do it.
You are correct ...
Now I have to find some place to stick +1TB of temp files ;)
Thanks for the help,
Jeb
Jim Davis wrote:
>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard automounting scheme. So now we make static
> mounts of the N
Eric Schrock wrote:
> On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote:
>> - I know I can attach it via the zpool commands, but is there a way to
>> kickstart the attachment process if it fails to attach automatically upon
>> disk failure?
>
> Yep. Just do a 'zpool replace zmir <failed-device> <spare-device>'. T
Jeb Campbell wrote:
After upgrade you did actually re-create your raid-z
pool, right?
No, but I did "zpool upgrade -a".
Hmm, I guess I'll try re-writing the data first. I know you have to do that if
you change compression options.
Ok -- rewriting the data doesn't work ...
I'll create a new
Hello Matthew,
Tuesday, December 12, 2006, 7:13:47 PM, you wrote:
MCA> We are currently working on a plan to upgrade our HA-NFS cluster
MCA> that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10
MCA> and ZFS. Is there a known procedure or best practice for this? I
MCA> have enough free
On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote:
>
> - I know I can attach it via the zpool commands, but is there a way to
> kickstart the attachment process if it fails to attach automatically upon
> disk failure?
Yep. Just do a 'zpool replace zmir <failed-device> <spare-device>'. This is what the
FMA agent
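With the devices from the original post (pool zmir, hot spare c3t1d0, spun-down disk
c3t2d0), that would be, for example:
  zpool replace zmir c3t2d0 c3t1d0   # swap the spare in for the failed disk
  zpool status zmir                  # should show the spare resilvering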
Hello Jeb,
Tuesday, December 12, 2006, 7:11:30 PM, you wrote:
>> After upgrade you did actually re-create your raid-z
>> pool, right?
JC> No, but I did "zpool upgrade -a".
JC> Hmm, I guess I'll try re-writing the data first. I know you have
JC> to do that if you change compression options.
II
Setting:
We've been operating in the following setup for well over 60 days.
- E2900 (24 x 92)
- 2 2Gbps FC to EMC SAN
- Solaris 10 Update 2 (06/06)
- ZFS with compression turned on
- Global zone + 1 local zone (sparse)
- Local zone is fed ZFS clones from the global Zone
Daily Routine
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC
disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNS. Now how would ZFS be used for
the best performance?
What I'm trying to ask is if you have 3 LUNS and you want to create a
We are currently working on a plan to upgrade our HA-NFS cluster that uses
HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is there a
known procedure or best practice for this? I have enough free disk space to
recreate all the filesystems and copy the data if necessary, but would
Hello Jason,
Thursday, December 7, 2006, 11:18:17 PM, you wrote:
JJWW> Hi Luke,
JJWW> That's terrific!
JJWW> You know you might be able to tell ZFS which disks to look at. I'm not
JJWW> sure. It would be interesting, if anyone with a Thumper could comment
JJWW> on whether or not they see the im
> After upgrade you did actually re-create your raid-z
> pool, right?
No, but I did "zpool upgrade -a".
Hmm, I guess I'll try re-writing the data first. I know you have to do that if
you change compression options.
Ok -- rewriting the data doesn't work ...
I'll create a new temp pool and see
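A per-filesystem sketch of that move (pool and filesystem names made up, and note
that recursive send isn't available yet, as mentioned elsewhere in this thread):
  zpool create temppool c5t0d0                         # scratch pool on a spare disk
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | zfs recv temppool/data  # repeat per filesystem
  # then destroy and re-create the raid-z pool and send everything back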
Hello Toby,
Tuesday, December 12, 2006, 4:18:54 PM, you wrote:
TT> On 12-Dec-06, at 9:46 AM, George Wilson wrote:
>> Also note that the UB is written to every vdev (4 per disk) so the
>> chances of all UBs being corrupted is rather low.
It depends actually - if all your vdevs are on the same
> Hi All,
>
> Assume the device c0t0d0 size is 10 KB.
> I created ZFS file system on this
> $ zpool create -f mypool c0t0d0s2
This creates a pool on the entire slice.
> and to limit the size of ZFS file system I used quota property.
>
> $ zfs set quota = 5000K mypool
Note
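For what it's worth, the zfs property syntax takes no spaces around the '=', so on a
working pool the quota would be set like this:
  zfs set quota=5000K mypool
  zfs get quota mypool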
> But seriously, the big issue with SCSI is that the SCSI commands are sent
> over the SCSI bus at the original (legacy) rate of 5 Mbits/Sec in 8-bit
> mode.
Actually, this isn't true on the newest (Ultra320) SCSI systems, though I don't
know if the 3320 supports packetized SCSI. It's definitel
Hello Jeb,
Tuesday, December 12, 2006, 6:04:36 PM, you wrote:
JC> I updated to Sol10u3 last night, and I'm still seeing differences
JC> between "du -h" and "ls -h".
JC> "du" seems to take into account raidz and compression -- if this is
JC> correct, please let me know.
JC> It makes sense
Hello Chris,
Wednesday, December 6, 2006, 6:23:48 PM, you wrote:
CG> One of our file servers internally to Sun that reproduces this
CG> running nv53 here is the dtrace output:
Any conclusions yet?
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
I updated to Sol10u3 last night, and I'm still seeing differences
between "du -h" and "ls -h".
"du" seems to take into account raidz and compression -- if this is correct,
please let me know.
It makes sense that "du" reports actual disk usage, but this makes some scripts
I wrote very
On Dec 12, 2006, at 10:02, Al Hopper wrote:
Another possibility, which is on my todo list to check out, is:
http://www.norcotek.com/item_detail.php?categoryid=8&modelno=DS-1220
I would not go with this device. I picked up one along with 12 500GB
SATA drives with the hopes of making a dumpin
>
> Not right now (without a bunch of shell-scripting).
> I'm working on
> being able to "send" a whole tree of filesystems &
> their snapshots.
> Would that do what you want?
Exactly! When do you think that (really useful) feature will be available?
thanks,
gino
> [...] there is no possibility of referencing an overwritten
> block unless you have to back off more than two uberblocks. At this
> point, blocks that have been overwritten will show up as corrupted (bad
> checksums).
Hmmm. Is there some way we can warn the user to scrub their pool because we
Jim Hranicky wrote:
Now having said that I personally wouldn't have
expected that zpool export should have worked as easily as that while
there were shared filesystems. I would have expected that exporting
the pool should have attempted to unmount all the ZFS filesystems first -
which would h
> UFS will panic on EIO also. Most other file systems, too.
In which cases will UFS panic on an I/O error?
A quick browse through the UFS code shows several cases where we can panic if
we have bad metadata on disk, but none if a disk read (or write) fails
altogether.
If UFS fails to read a bl
Bill Casale wrote:
Please reply directly to me. Seeing the message below.
Is it possible to determine exactly which file is corrupted?
I was thinking the OBJECT/RANGE info may be pointing to it
but I don't know how to equate that to a file.
This is bug:
6410433 'zpool status -v' would be more
On Fri, 8 Dec 2006, Jochen M. Kaiser wrote:
> Dear all,
>
> we're currently looking forward to restructure our hardware environment for
> our datawarehousing product/suite/solution/whatever.
>
> We're currently running the database side on various SF V440's attached via
> dual FC to our SAN backen
For my latest test I set up a stripe of two mirrors with one hot spare
like so:
zpool create -f -m /export/zmir zmir mirror c0t0d0 c3t2d0 mirror c3t3d0 c3t4d0
spare c3t1d0
I spun down c3t2d0 and c3t4d0 simultaneously, and while the system kept
running (my tar over NFS barely hiccuped), the zpoo
NetApp can actually grow their RAID groups, but they recommend adding an entire
RAID group at once instead. If you add a disk to a RAID group on NetApp, I
believe you need to manually start a reallocate process to balance data across
the disks.
On 12-Dec-06, at 9:46 AM, George Wilson wrote:
Also note that the UB is written to every vdev (4 per disk) so the
chances of all UBs being corrupted is rather low.
Furthermore the time window where UBs are mutually inconsistent would
be very short, since they'd be updated together?
--Tob
Hello Jim,
Wednesday, December 6, 2006, 3:28:53 PM, you wrote:
JD> We have two aging Netapp filers and can't afford to buy new Netapp gear,
JD> so we've been looking with a lot of interest at building NFS fileservers
JD> running ZFS as a possible future approach. Two issues have come up in the
J
[EMAIL PROTECTED] wrote:
Hello Casper,
Tuesday, December 12, 2006, 10:54:27 AM, you wrote:
So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will
become corrupt through something that doesn't also make all the data
also corrupt or inaccessible.
CDSC> So how does this work for
Also note that the UB is written to every vdev (4 per disk) so the
chances of all UBs being corrupted is rather low.
Thanks,
George
Darren Dunham wrote:
DD> To reduce the chance of it affecting the integrity of the filesystem,
DD> there are multiple copies of the UB written, each with a checks
Maybe this will help:
http://blogs.sun.com/roch/entry/zfs_and_directio
-r
dudekula mastan writes:
> Hi All,
>
> We have directio() to do direct I/O on a UFS file system. Does
> anyone know how to do direct I/O on a ZFS file system?
>
> Regards
> Masthan
Hello Jochen,
Sunday, December 10, 2006, 10:51:57 AM, you wrote:
JMK> James,
>> Just a thought.
>>
>> have you thought about giving thumper x4500's a trial
>> for this work
>> load? Oracle would seem to be IO limited in the end
>> so 4 cores may be
>> enough to keep oracle happy when linked wi
Bill,
If you want to find the file associated with the corruption you could do
a "find /u01 -inum 4741362" or use the output of "zdb -d u01" to
find the object associated with that id.
Thanks,
George
Bill Casale wrote:
Please reply directly to me. Seeing the message below.
Is it possib
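Expanding on George's suggestion a little (the dataset name comes from the zpool
status output below; the extra zdb flags are from memory, so treat this as a sketch):
  find /u01 -xdev -inum 4741362 -print   # map the object id back to a path name
  zdb -dddd u01 4741362                  # or dump that object directly from the pool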
>Hello Casper,
>
>Tuesday, December 12, 2006, 10:54:27 AM, you wrote:
>
>>>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will
>>>become corrupt through something that doesn't also make all the data
>>>also corrupt or inaccessible.
>
>
>CDSC> So how does this work for data which i
Hello Bill,
Tuesday, December 12, 2006, 2:34:01 PM, you wrote:
BC> Please reply directly to me. Seeing the message below.
BC> Is it possible to determine exactly which file is corrupted?
BC> I was thinking the OBJECT/RANGE info may be pointing to it
BC> but I don't know how to equate that to a f
Hello dudekula,
Tuesday, December 12, 2006, 9:36:24 AM, you wrote:
>
Hi All,
We have directio() to do direct I/O on a UFS file system. Does anyone know how to do direct I/O on a ZFS file system?
Right now you can't.
--
Best regards,
Robert mailto:
Please reply directly to me. Seeing the message below.
Is it possible to determine exactly which file is corrupted?
I was thinking the OBJECT/RANGE info may be pointing to it
but I don't know how to equate that to a file.
# zpool status -v
pool: u01
state: ONLINE
status: One or more devices
For the record, this happened with a new filesystem. I didn't
muck about with an old filesystem while it was still mounted,
I created a new one, mounted it and then accidentally exported
it.
> > Except that it doesn't:
> >
> > # mount /dev/dsk/c1t1d0s0 /mnt
> > # share /mnt
> > # umount /mnt
> >
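For anyone hitting the same thing, the sequence that avoids stale NFS handles
(filesystem name hypothetical; zmir is the pool from this thread) would presumably be:
  zfs unshare zmir/test    # or: zfs unshare -a
  zfs unmount zmir/test
  zpool export zmir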
Boyd Adamson wrote:
On 12/12/2006, at 8:48 AM, Richard Elling wrote:
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. Should I file this as a bug, or
should I just
Hello Casper,
Tuesday, December 12, 2006, 10:54:27 AM, you wrote:
>>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will
>>become corrupt through something that doesn't also make all the data
>>also corrupt or inaccessible.
CDSC> So how does this work for data which is freed and
On 12 December, 2006 - dudekula mastan sent me these 2,7K bytes:
>
> Hi All,
>
> Assume the device c0t0d0 size is 10 KB.
>
> I created ZFS file system on this
>
> $ zpool create -f mypool c0t0d0s2
>
> and to limit the size of ZFS file system I used quota property.
>
Hi All,
Assume the device c0t0d0 size is 10 KB.
I created ZFS file system on this
$ zpool create -f mypool c0t0d0s2
and to limit the size of ZFS file system I used quota property.
$ zfs set quota = 5000K mypool
Which 5000 K bytes belong (or are reserved) t
[EMAIL PROTECTED] looks like the more appropriate list to
post questions like yours.
dudekula mastan wrote:
Hi Everybody,
I have some problems with the Solaris 10 installation.
After installing the first CD, I removed the CD from the CD-ROM; after that the machine keeps rebooting agai
>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will
>become corrupt through something that doesn't also make all the data
>also corrupt or inaccessible.
So how does this work for data which is freed and overwritten; does
the system make sure that none of the data referenced by
Hi All,
We have directio() to do direct I/O on a UFS file system. Does anyone
know how to do direct I/O on a ZFS file system?
Regards
Masthan