> Also, (Richard can address this better than I) you may want to disable
> the ZIL or have your array ignore the write cache flushes that ZFS issues.
The latter is quite a reasonable thing to do, since the array has
battery-backed cache.
The ZIL should almost *never* be disabled. The only r
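For completeness, the tunable being discussed lives in /etc/system on 2006-era builds. This is only a sketch, and as noted above it is almost never the right choice; having a battery-backed array ignore the cache-flush commands is the safer of the two options:

  # sketch only -- globally disables the ZIL on 2006-era builds; discouraged
  # a reboot is required for /etc/system changes to take effect
  echo "set zfs:zil_disable = 1" >> /etc/system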
Hi Neal,
Currently the GUI requires you to add sides to a mirror as an
additional step.
Steve
Neal Weiss wrote:
> I would like to create the following pool using the zfs gui:
>
> zpool create tank mirror c0t7d0 c1t7d0 mirror c4t7d0 c5t7d0 mirror
> c6t7d0 c7t7d0
>
> The gui does not seem to let
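As a CLI workaround, the pool can be created in one shot as above, or grown mirror by mirror with zpool add; a sketch using the same device names:

  # create the pool with the first mirror, then add the other two as a second step
  zpool create tank mirror c0t7d0 c1t7d0
  zpool add tank mirror c4t7d0 c5t7d0
  zpool add tank mirror c6t7d0 c7t7d0
  zpool status tank    # should show three top-level mirror vdevs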
Jochen M. Kaiser wrote:
Didn't find any decent SAS controllers, though; QLogic has some,
but the PCIe model with two external ports isn't supported on
Solaris. The single-port model would work, though...
We (Sun) sell LSI 1064-based SAS/SATA controllers. There should be
several sources of these
On a recent journey of pain and frustration, I had to recover a UFS
filesystem from a broken disk. The disk had many bad blocks and more
were going bad over time. Sadly, there were just a few files that I
wanted, but I could not mount the disk without it killing my system.
(PATA disks... PITA i
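For what it's worth, one way to limp through that (a sketch, assuming the failing disk shows up as c1d0 and the image fits under /var/tmp) is to copy the raw slice while skipping unreadable blocks, then mount the image read-only and pull out the files that matter:

  # pad unreadable blocks with zeros instead of aborting on the first error
  dd if=/dev/rdsk/c1d0s0 of=/var/tmp/broken-disk.img bs=512 conv=noerror,sync
  # attach the image as a block device and mount it read-only
  lofiadm -a /var/tmp/broken-disk.img
  mount -F ufs -o ro /dev/lofi/1 /mnt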
Al,
> > Being a friend of simplicity I was thinking about
> using a pair (or more) of 3320
> > SCSI JBODs with multiple RAIDZ and/or RAID10 zfs
> disk pools on which we'd
>
> Have you not heard that SCSI is dead? :)
> While I understand you don't want to build a SAN, an
> alternative would be
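For what it's worth, both layouts are simple to express once the JBOD presents plain disks; a sketch with hypothetical target numbers:

  # RAID-Z across six disks in one JBOD
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

  # or a RAID-10-style pool: a stripe of mirrors split across the two JBODs
  zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0 \
      mirror c2t2d0 c3t2d0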
Robert,
> It's not that bad with CPU usage.
> For example with RAID-Z2 while doing scrub I get
> something like
> 800MB/s read from disks (550-600MB/s from zpool
> iostat perspective)
> and all four cores are mostly consumed - I get
> something like 10% idle
> on each cpu.
===
But in the end this
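Those numbers are easy to check on one's own pool; a sketch of the commands involved (the pool name is just an example):

  # kick off a scrub and watch throughput and CPU idle while it runs
  zpool scrub tank
  zpool iostat -v tank 5      # per-vdev bandwidth, 5-second samples
  mpstat 5                    # the idl column shows how busy each CPU is
  zpool status tank           # reports scrub progress and completion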
On Wed, 2006-12-13 at 10:24 -0800, Richard Elling wrote:
> > I've seen two cases of disk failure where errors only occurred during
> > random I/O; all blocks were readable sequentially; in both cases, this
> > permitted the disk to be replaced without data loss and without
> > resorting to backups
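For a disk that is part of a ZFS pool, the replacement step itself is a single command; a sketch with hypothetical device names:

  # swap the suspect disk for a new one; ZFS resilvers from the surviving copies
  zpool replace tank c0t7d0 c0t8d0
  zpool status tank    # watch the resilver finish before pulling the old disk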
Patrick P Korsnick wrote:
i have a machine with a disk that has some sort of defect and
i've found that if i partition only half of the disk that the
machine will still work. i tried to use 'format' to scan the
disk and find the bad blocks, but it didn't work.
By "it didn't work" did you mea
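For reference, the surface scan lives under format's analyze submenu; a rough sketch of the session (the read test is the non-destructive one):

  format                 # select the suspect disk from the menu, then:
  format> analyze
  analyze> read          # non-destructive read scan; reports unreadable blocks
  analyze> quit
  format> quit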
Robert Milkowski wrote:
Hello Torrey,
Tuesday, December 12, 2006, 11:40:42 PM, you wrote:
TM> Robert Milkowski wrote:
Hello Matthew,
MCA> Also, I am considering what type of zpools to create. I have a
MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
MCA> JBOD (at least
Bill Sommerfeld wrote:
On Tue, 2006-12-12 at 22:49 -0800, Patrick P Korsnick wrote:
i have a machine with a disk that has some sort of defect and i've
found that if i partition only half of the disk that the machine will
still work. i tried to use 'format' to scan the disk and find the bad
bloc
> i have a machine with a disk that has some sort of
> defect and i've found that if i partition only half
> of the disk that the machine will still work.
"will still work" ... for now.
Don't keep using this disk. Chances are that something really bad has happened
to it (e.g. the head has scrap
> I view undetected in-memory errors from a hardware perspective,
> not as a software bug. Clearly, software bugs can exist, but
> we presume testing will find these.
Sure. My point is simply that, given that we have a monolithic kernel, any bug
in kernel or driver code can corrupt any memory i
> This is probably an attempt to 'short-stroke' a larger disk with the
> intention of utilising only a small amount of the disk surface, as a
> technique it used to be quite common for certain apps (notably DBs).
> Hence you saw deployments of quite large disks but with perhaps only
> 1/4-1/2
Anton B. Rang wrote:
If the SCSI commands hang forever, then there is nothing that ZFS can
do, as a single write will never return. The more likely case is that
the commands are continually timing out with very long response times,
and ZFS will continue to talk to them forever.
It looks like
Anton B. Rang wrote:
Also note that the UB is written to every vdev (4 per disk) so the
chances of all UBs being corrupted are rather low.
The chances that they're corrupted by the storage system, yes.
However, they are all sourced from the same in-memory buffer, so
an undetected in-memory err
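For anyone who wants to inspect those copies, zdb can dump the active uberblock and the per-disk labels; a sketch with example pool and device names:

  # dump the active uberblock for the pool
  zdb -u tank
  # dump the four labels on one member disk, which hold the redundant copies
  zdb -l /dev/rdsk/c0t7d0s0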
I would like to create the following pool using the zfs gui:
zpool create tank mirror c0t7d0 c1t7d0 mirror c4t7d0 c5t7d0 mirror c6t7d0 c7t7d0
The gui does not seem to let me create multiple vdevs in a pool at the same
time. I know I can go back and add the mirrors later on, but I would like to
Kory Wheatley wrote:
The LUNs will be on separate "SPA" controllers, not all on
the same controller, so that's why I thought that if we split
our data across different disks and ZFS storage pools we would
get better I/O performance. Correct?
The way to think about it is that, in general, for best
perfo
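One way to picture it: a single pool can already stripe across LUNs presented by both controllers rather than being split into several pools. A sketch only, assuming SP-A LUNs appear on c4 and SP-B LUNs on c6 (hypothetical names):

  # one pool whose mirrored top-level vdevs span both SP controllers
  zpool create tank mirror c4t0d0 c6t0d0 mirror c4t1d0 c6t1d0
  zpool iostat -v tank 5    # confirms I/O is spread across all the LUNs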
This is probably an attempt to 'short-stroke' a larger disk with the
intention of utilising only a small amount of the disk surface, as a
technique it used to be quite common for certain apps (notably DBs).
Hence you saw deployments of quite large disks but with perhaps only
1/4-1/2 physical
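With ZFS the same short-stroking effect can be approximated by handing the pool only a small slice at the start of the disk; a rough sketch (disk and slice names are examples):

  format                      # select the large disk, then:
  format> partition
  partition> 0                # size slice 0 to, say, the outer quarter of the disk
  partition> label
  partition> quit
  format> quit
  # build the pool on just that slice
  zpool create fast c0t2d0s0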
The LUNs will be on separate "SPA" controllers, not all on the same controller,
so that's why I thought that if we split our data across different disks and ZFS
storage pools we would get better I/O performance. Correct?
> $mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000
>
> The above command creates a vxfs file system on the first 2048000 blocks (each
> block is 1024 bytes) of /dev/rdsk/c5t20d9s2.
>
> Is there a similar option to limit the size of a ZFS file system? If
> so, what is it? How it i
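The closest ZFS equivalent is a quota on the dataset rather than a size argument to mkfs; a sketch with example pool and dataset names:

  # cap the dataset at roughly 2 GB, analogous to the 2048000 x 1 KB blocks above
  zfs create tank/data
  zfs set quota=2g tank/data
  zfs get quota tank/data     # verify the limit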
> > Also note that the UB is written to every vdev (4 per disk) so the
> > chances of all UBs being corrupted are rather low.
>
> The chances that they're corrupted by the storage system, yes.
>
> However, they are all sourced from the same in-memory buffer, so an
> undetected in-memory error (e.
On Tue, 2006-12-12 at 22:49 -0800, Patrick P Korsnick wrote:
> i have a machine with a disk that has some sort of defect and i've
> found that if i partition only half of the disk that the machine will
> still work. i tried to use 'format' to scan the disk and find the bad
> blocks, but it didn't
Thanks, I just downloaded Update 3 and hopefully the problem will go away.
Hello Neil,
Wednesday, December 13, 2006, 1:59:15 AM, you wrote:
NP> Tom Duell wrote On 12/12/06 17:11,:
>> Group,
>>
>> We are running a benchmark with 4000 users
>> simulating a hospital management system
>> running on Solaris 10 6/06 on USIV+ based
>> SunFire 6900 with 6540 storage array.
>>
Robert Milkowski wrote:
Hello Chris,
Wednesday, December 6, 2006, 6:23:48 PM, you wrote:
CG> On one of our file servers internal to Sun that reproduces this,
CG> running nv53, here is the dtrace output:
Any conclusions yet?
Not yet. We had to delete all the "automatic" snapshots we had so th
Hello Torrey,
Tuesday, December 12, 2006, 11:40:42 PM, you wrote:
TM> Robert Milkowski wrote:
>> Hello Matthew,
>>
>>
>> MCA> Also, I am considering what type of zpools to create. I have a
>> MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a
>> MCA> JBOD (at least that is what
Ok now this takes the "Most egregiously creative misuse of ZFS" award :-)
I doubt ZFS can help if badblocks "didn't work". It would help to know what was
the problem with it, but generally a destructive test reveals a lot.
OTOH, you can also do better by writing a small program which writes rand
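A quick-and-dirty version of that destructive test can be done from the shell as well. This is only a sketch (it destroys everything on the disk, and the device name is hypothetical), and it only checks that every block written can be read back rather than comparing the data:

  # write pseudo-random data across the whole raw disk, then read it all back;
  # any I/O error reported by dd points at a bad region
  dd if=/dev/urandom of=/dev/rdsk/c1d0s2 bs=1024k
  dd if=/dev/rdsk/c1d0s2 of=/dev/null bs=1024k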
Anton B. Rang writes:
> It took manufacturers of SCSI drives some years to get this
> right. Around 1997 or so we were still seeing drives at my former
> employer that didn't properly flush their caches under all
> circumstances (and had other "interesting" behaviours WRT caching).
>
> Lot
Hi Darren
Thanks for your reply.
Please take a closer look at the following command:
$mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000
The above command creates a vxfs file system on the first 2048000 blocks (each
block is 1024 bytes) of /dev/rdsk/c5t20d9s2.
The latency issue might improve with this RFE:
6471212 need reserved I/O scheduler slots to improve I/O latency of critical
ops
-r
Tom Duell writes:
> Group,
>
> We are running a benchmark with 4000 users
> simulating a hospital management system
> running on Solaris 10 6/06 on USIV+ bas