Brett wrote:
> Hi All,
>
> I've been reading through the documentation for ZFS and have noted in several
> blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0,
> etc. I am having a little trouble getting these more advanced configurations
> to play nicely.
>
> I have two di
Hi All,
I've been reading through the documentation for ZFS and have noted in several
blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0, etc.
I am having a little trouble getting these more advanced configurations to play
nicely.
I have two disk shelves, each with 9x 30
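For anyone wondering what those layouts look like in practice: ZFS builds its RAID1+0 and RAID5+0 analogues by listing several mirror or raidz groups on one zpool create line, and the pool then stripes across those groups. A rough sketch only, with placeholder device names rather than Brett's actual shelves:

  # RAID1+0 analogue: the pool stripes across multiple two-way mirrors
  zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0

  # RAID5+0 analogue: the pool stripes across multiple raidz groups
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 raidz c2t0d0 c2t1d0 c2t2d0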
Hey Richard, thanks for sparking the conversation... This is a very
interesting topic (especially if you take it out of the HPC "we need
1000 servers to have this minimal boot image" space into general
purpose/enterprise computing)
--
Based on your earlier note, it appears you're not planning
Roch Bourbonnais wrote:
On 29 May 07 at 22:59, [EMAIL PROTECTED] wrote:
When sequential I/O is done to the disk directly there is no performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk un
Dropping in on this convo a little late, but here's something that
has been nagging me - gaining the ability to mirror two (or more)
RAIDZ sets.
A little background on why I'd really like to see this
I have two data centers on my campus and my FC-based SAN stretches
between them. Whe
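ZFS does not currently accept a mirror built on top of raidz vdevs, so the closest layout available today for a stretched pool is a set of two-way mirrors with one half of each mirror in each data center. A sketch only, with made-up LUN names standing in for the real FC devices:

  # one side of every mirror lives in each data center (device names are placeholders)
  zpool create tank mirror dc1_lun0 dc2_lun0 mirror dc1_lun1 dc2_lun1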
Hey Richard, thanks for sparking the conversation... This is a very
interesting topic (especially if you take it out of the HPC "we need
1000 servers to have this minimal boot image" space into general
purpose/enterprise computing)
--
Based on your earlier note, it appears you're not planning
On Tue, 2007-05-29 at 18:48 -0700, Richard Elling wrote:
> The belief is that COW file systems which implement checksums and data
> redundancy (e.g., ZFS and the ZFS copies option) will be redundant over
> CF's ECC and wear leveling *at the block level.* We believe ZFS will
> excel in this area, but
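The copies option mentioned above is an ordinary dataset property, so the extra redundancy can be limited to the datasets that live on the CF device. A minimal sketch, with a made-up dataset name:

  # store two copies of every block in this dataset (dataset name is a placeholder)
  zfs set copies=2 rpool/cfboot
  zfs get copies rpool/cfboot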
Ellis, Mike wrote:
Also the "unmirrored memory" for the rest of the system has ECC and
ChipKill, which provides at least SOME protection against random
bit-flips.
CF devices, at least the ones we'd be interested in, do have ECC as
well as spare sectors and write verification.
Note: flash memor
Also the "unmirrored memory" for the rest of the system has ECC and
ChipKill, which provides at least SOME protection against random
bit-flips.
--
Question: It appears that CF and friends would make a decent live-boot
(but don't run on me like I'm a disk) type of boot-media due to the
limited wr
Richard Elling wrote:
But I am curious as to why you believe 2x CF are necessary?
I presume this is so that you can mirror. But the remaining memory
in such systems is not mirrored. Comments and experiences are welcome.
CF == bit-rot-prone disk, not RAM. You need to mirror it for all the
sa
Robert Milkowski wrote:
Hello Richard,
Thursday, May 24, 2007, 6:10:34 PM, you wrote:
RE> Incidentally, thumper field reliability is better than we expected.
RE> This is causing me to do extra work, because I have to explain why.
I've got some thumpers and they're very reliable.
Even disks a
On 29 May 07 at 22:59, [EMAIL PROTECTED] wrote:
When sequential I/O is done to the disk directly there is no performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
fil
On May 29, 2007, at 1:25 PM, Lida Horn wrote:
Point one, the comments that Eric made do not give the complete picture.
All the tests that Eric's referring to were done through ZFS filesystem.
When sequential I/O is done to the disk directly there is no performance
degradation at all.
Do
> When sequential I/O is done to the disk directly there is no performance
> degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
filesystem is used. All the tests that Eric and I have p
Point one, the comments that Eric made do not give the complete picture.
All the tests that Eric's referring to were done through ZFS filesystem.
When sequential I/O is done to the disk directly there is no performance
degradation at all. Second point, it does not take any additional
time in ldi
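A simple way to see the two cases being compared is a sequential dd run once against the raw device and once against a file on a ZFS filesystem; the device path and mountpoint below are placeholders:

  # sequential read straight from the raw device
  dd if=/dev/rdsk/c1t0d0s0 of=/dev/null bs=1024k count=1024
  # the same amount of sequential I/O through a ZFS filesystem
  dd if=/dev/zero of=/tank/fs/ddtest bs=1024k count=1024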
I've been looking into the performance impact of NCQ. Here's what i
found out:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Curiously, there's not too much performance data on NCQ available via
a google search ...
enjoy,
eric
Michael Barrett wrote:
Does ZFS handle a file system full situation any better than UFS? I had
a ZFS file system run at 100% full for a few days, deleted out the
offending files to bring it back down to 75% full, and now in certain
directories I cannot issue an ls -la (it hangs) but an ls works
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb compressed zpool / zfs on
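For reference, the compressed pool mentioned above is just the compression property turned on for the dataset that holds the workspace; a minimal sketch with a placeholder dataset name:

  zfs set compression=lzjb tank/ws      # enable lzjb compression for new writes
  zfs get compressratio tank/ws         # report the achieved compression ratio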
On Fri, 25 May 2007, Ben Rockwood wrote:
> May 25 23:32:59 summer unix: [ID 836849 kern.notice]
> May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740:
> May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
> Page fault) rp=ff00232c3a80 addr=490 occurred in mo
Hi all,
I am trying to write a script to move disk partitions from one disk to another.
The ufs partitions are transferred using ufsdump and ufsrestore - quite easily.
My question is :
How can I do a dump and restore of a partition that contains a ZFS file system?
P.S.
My script would have acces
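ZFS has no ufsdump/ufsrestore pair; the usual equivalent is to snapshot the filesystem and pipe it through zfs send and zfs receive. A sketch, with placeholder pool and dataset names:

  zfs snapshot olddisk/data@move                        # point-in-time copy to transfer
  zfs send olddisk/data@move | zfs receive newdisk/data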
Perfect, I will try to play with that...
Regards,
Chris
On Tue, 29 May 2007, Cyril Plisko wrote:
On 5/29/07, Krzys <[EMAIL PROTECTED]> wrote:
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more
st
On 5/29/07, Krzys <[EMAIL PROTECTED]> wrote:
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more storage
to this pool (double the space) then start using it. Then what I wanted to do is
just take out the
Also, build 64 still had this bug:
6553537 (zfs root fails to boot from a snv_63+zfsboot-pfinstall
netinstall image)
which affects zfs roots set up with netinstall/dvdinstall, but not the
manual install.
The bug is fixed in build 65.
And yes, the standard installation software still doesn'
Hello folks, I have a question. Currently I have zfs pool (mirror) on two
internal disks... I wanted to connect that server to SAN, then add more storage
to this pool (double the space) then start using it. Then what I wanted to do is
just take out the internal disks out of that pool and use SAN
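One common way to do that kind of migration with a two-way mirror is to attach a SAN LUN alongside each internal disk, let the resilver finish, and then detach the internal disks; growing the pool afterwards is a separate step of adding another mirror. A sketch only, with placeholder device names:

  # attach a SAN LUN next to each internal disk (device names are placeholders)
  zpool attach tank c0t0d0 c4t60001234d0
  zpool attach tank c0t1d0 c4t60005678d0
  # once zpool status shows the resilver has completed, drop the internal disks
  zpool detach tank c0t0d0
  zpool detach tank c0t1d0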
Manoj Joseph wrote:
Michael Barrett wrote:
Normally if you have a ufs file system hit 100% and you have a very
high level of system and application load on the box (that resides in
the 100% file system) you will run into inode issues that require a
fsck and show themselves by not being able
dudekula mastan wrote:
At least in my experience, I saw corruptions when the ZFS file system was
full. So far there is no way to check the file system consistency on ZFS
(to the best of my knowledge). ZFS people claim that the ZFS file system
is always consistent and there is no need for the FSCK comman
dudekula mastan wrote:
At least in my experience, I saw corruptions when the ZFS file system was
full. So far there is no way to check the file system consistency on ZFS
(to the best of my knowledge). ZFS people claim that the ZFS file system
is always consistent and there is no need for the FSCK comman
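The closest thing ZFS offers to an fsck-style check is a scrub, which walks every allocated block and verifies its checksum while the pool stays online; a minimal sketch with a placeholder pool name:

  zpool scrub tank        # read and verify every block in the pool
  zpool status -v tank    # shows scrub progress and any checksum errors found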
Michael Barrett wrote:
Normally if you have a ufs file system hit 100% and you have a very high
level of system and application load on the box (that resides in the
100% file system) you will run into inode issues that require a fsck and
show themselves by not being able to long list out all