Well yeah, this is obviously not a valid setup for my data, but if you read my
first e-mail, the whole point of this test was that I had seen Solaris hang
when a drive was removed from a fully redundant array (five sets of three-way
mirrors), and wanted to see what was going on.
So I started wi
A question regarding zfs_nocacheflush:
The Evil Tuning Guide says to only enable this if every device is
protected by NVRAM.
However, is it safe to enable zfs_nocacheflush when I also have
local drives (the internal system drives) using ZFS, in particular if
the write cache is disabled on those d
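For reference, a minimal sketch of how this tunable is typically set on Solaris; the /etc/system entry takes effect after a reboot, while the mdb line patches the running kernel. Use with care, and only if every device really is NVRAM-protected:

  # persistent: add to /etc/system, then reboot
  set zfs:zfs_nocacheflush = 1

  # temporary: change the live kernel
  echo zfs_nocacheflush/W0t1 | mdb -kw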
> waynel wrote:
> >
> > We have a couple of machines similar to the one you just
> > spec'ed. They have worked great. The only problem
> > is, the power management routine only works for K10
> > and later. We will move to Intel Core 2 Duo for
> > future machines (mainly b/c power management
> > co
Hello Bob,
Wednesday, July 30, 2008, 3:07:05 AM, you wrote:
BF> On Wed, 30 Jul 2008, Robert Milkowski wrote:
>>
>> Both cases are basically the same.
>> Please notice I'm not talking about disabling ZIL, I'm talking about
>> disabling cache flushes in ZFS. ZFS will still wait for the array to
>>
Hello Peter,
Wednesday, July 30, 2008, 9:19:30 AM, you wrote:
PT> A question regarding zfs_nocacheflush:
PT> The Evil Tuning Guide says to only enable this if every device is
PT> protected by NVRAM.
PT> However, is it safe to enable zfs_nocacheflush when I also have
PT> local drives (the intern
Hello Sam,
Wednesday, July 30, 2008, 3:23:55 AM, you wrote:
S> I've had my 10x500 ZFS+ running for probably 6 months now and had
S> thought it was scrubbing occasionally (wrong) so I started a scrub
S> this morning; it's almost done now and I got this:
S> errors: No known data errors
S> # zpool s
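For anyone wanting to schedule scrubs rather than assume they happen automatically (they don't), a sketch assuming a pool named tank:

  # start a scrub and check its progress / results
  zpool scrub tank
  zpool status -v tank

  # run a scrub every Sunday at 03:00 via root's crontab
  0 3 * * 0 /usr/sbin/zpool scrub tank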
"Chris Cosby" <[EMAIL PROTECTED]> wrote:
> If they are truly limited, something like an rsync or similar. There was a
> script being thrown around a while back that was touted as the Best Backup
> Script That Doesn't Do Backups, but I can't find it. In essence, it just
> created a list of what changed since the last backup and allowed you to use
> tar/cpio/cp - whatever to do the backup.
eric kustarz <[EMAIL PROTECTED]> wrote:
> > Best Backup Script That Doesn't Do Backups, but I can't find it. In
> > essence, it just created a list of what changed since the last
> > backup and allowed you to use tar/cpio/cp - whatever to do the backup.
>
> I think zfs send/recv would be a gre
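A rough sketch of the snapshot-plus-incremental-send approach being suggested, assuming a local filesystem tank/data and a backup host reachable over ssh (all names are illustrative):

  # initial full copy
  zfs snapshot tank/data@mon
  zfs send tank/data@mon | ssh backuphost zfs recv -F backup/data

  # later: send only what changed since @mon
  zfs snapshot tank/data@tue
  zfs send -i mon tank/data@tue | ssh backuphost zfs recv backup/data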
I was just wondering what would happen in a raidz array if one of the drives
failed? I imagine everything would work fine for reading, but will I be able to
write to the array still?
If I can write, does that mean that replacing the dead drive with a working one
will catch up the new drive? I t
Hey Nils & everyone
Finally getting around to answering Nil's mail properly - only a month
late! I thought I'd also let everyone else know what's been going on
with the service, since 0.10 released in January this year.
On Tue, 2008-06-24 at 14:40 -0700, Nils Goroll wrote:
> first of all: Tim,
I'm considering making a zfs raid with slices until I can get the right hard
drive configuration to use full drives. What kind of performance difference is
there? Will it be just a bigger hit on cpu, or will it be a big hit because zfs
can no longer do any command queueing?
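For comparison, both layouts use the same syntax; one practical difference is that ZFS only enables the disk write cache automatically when it is given whole disks (device names below are hypothetical):

  # raidz on slices
  zpool create tank raidz c1t1d0s3 c1t2d0s3 c1t3d0s3

  # raidz on whole disks (ZFS labels them and enables the write cache)
  zpool create tank raidz c1t1d0 c1t2d0 c1t3d0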
> It depends: if you'd like to be able to restore single files, zfs send/recv
> would not be appropriate.
Why not?
With zfs you can easily view any file/dir from a snapshot (via the .zfs
dir). You can also copy that instance of the file into your running fs with
cp.
justin
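A small example of what Justin describes, assuming a filesystem tank/home with a snapshot named backup1 (names are illustrative):

  # browse the snapshot read-only
  ls /tank/home/.zfs/snapshot/backup1/

  # copy a single file back into the live filesystem
  cp /tank/home/.zfs/snapshot/backup1/report.doc /tank/home/report.doc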
Hi,
I've had 3 zfs file systems hang completely when one of the drives in their
pool fails. This has happened on both USB as well as internal SAS drives. In
/var/adm/messages, I'd get this kind of msg:
Jul 29 13:45:24 zen SCSI transport failed: reason 'timeout': retrying
command
Jul 29
Are you running the Solaris CIFS Server by any chance?
Just checking, are you planning to have the receiving ZFS system read only?
I'm not sure how ZFS receive works on a system if changes have been made, but I
would expect they would be overwritten.
On Wed, 30 Jul 2008, Ross wrote:
>
> Imagine you had a raid-z array and pulled a drive as I'm doing here.
> Because ZFS isn't aware of the removal it keeps writing to that
> drive as if it's valid. That means ZFS still believes the array is
> online when in fact it should be degraded. If any o
On Wed, 30 Jul 2008, Vanja wrote:
>
> And finally, if this is the case, is it possible to make an array
> with 3 drives, and then add the mirror later? I imagine this is
> extremely similar to the previous situation.
One of the joys of using mirrors is that you can add a mirror device,
and you
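The operation being described is zpool attach; a sketch with hypothetical device names, turning a single disk (or an existing mirror) into a wider mirror:

  # attach c1t3d0 as a mirror of the existing device c1t2d0 in pool tank
  zpool attach tank c1t2d0 c1t3d0

  # watch the resilver complete
  zpool status tank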
I agree that device drivers should perform the bulk of the fault monitoring,
however I disagree that this absolves ZFS of any responsibility for checking
for errors. The primary goal of ZFS is to be a filesystem and maintain data
integrity, and that entails both reading and writing data to the
I accidentally ran 'zpool create -f' on the wrong drive. The previously zfs
formatted and populated drive now appears blank. The operation was too quick to
have formatted the drive so it must just be the indexes/TOC that are lost.
I have not touched the newly created filesystem at all and the dr
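There is no guaranteed undo for this, but before writing anything else to the disk it may be worth seeing what is still on the labels; a sketch with a hypothetical device name, and no promise that the old pool is recoverable:

  # dump whatever ZFS labels remain on the device
  zdb -l /dev/dsk/c1t2d0s0

  # list pools that were destroyed (not overwritten) and could be re-imported
  zpool import -D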
On Wed, Jul 30, 2008 at 07:24, Vanja <[EMAIL PROTECTED]> wrote:
> I was just wondering what would happen in a raidz array if one of the drives
> failed? I imagine everything would work fine for reading, but will I be able
> to write to the array still?
Yes.
> If I can write, does that mean that
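The "catching up" is called resilvering; a sketch of replacing a failed disk in a raidz pool named tank (device names hypothetical):

  # swap the dead c1t3d0 for a new disk in the same slot
  zpool replace tank c1t3d0

  # or, if the new disk shows up under a different name
  zpool replace tank c1t3d0 c1t4d0

  # resilver progress
  zpool status tank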
On Wed, 30 Jul 2008, Ross Smith wrote:
>
> I'm not saying that ZFS should be monitoring disks and drivers to
> ensure they are working, just that if ZFS attempts to write data and
> doesn't get the response it's expecting, an error should be logged
> against the device regardless of what the dri
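For what it's worth, a quick way to see what, if anything, has actually been logged against a device, assuming a Solaris box with FMA running:

  # pools with errors or non-ONLINE devices
  zpool status -x

  # error telemetry and any diagnosed faults
  fmdump -e
  fmadm faulty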
Sorry I meant "add the parity drive".
I've got too much data to keep secondary backups, for now.
Your point is well taken that ZFS should not duplicate functionality
that is already or should be available at the device driver level. In
this case, I think it misses the point of what ZFS should be doing that
it is not.
ZFS does its own periodic commits to the disk, and it knows if those
> This means you can convert a non-redundant load-shared configuration into a
> redundant load-shared configuration.
Bob,
Does that imply that when you add a mirror, ZFS automatically load-balances across
its mirrors?
Does that also mean that when the drives in a mirror are not as fast as each
other, the fs
I've been reading about the work using flash SSD devices for ZIL and cache
devices. I was wondering if anyone knows what releases of OpenSolaris and
Solaris these features are available on? The performance improvements are
pretty dramatic.
Alastair
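Once you are on a build that supports them, adding the devices themselves is straightforward; a sketch with hypothetical SSD device names and a pool named tank:

  # dedicated ZIL (slog) device
  zpool add tank log c2t0d0

  # L2ARC cache device
  zpool add tank cache c2t1d0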
Which OS release/version?
-- richard
Justin Vassallo wrote:
>
> Hi,
>
> I’ve had 3 zfs file systems hang completely when one of the drives in
> their pool fails. This has happened on both USB as well as internal
> SAS drives. In /var/adm/messages, I’d get this kind of msg:
>
> Jul 29 13:45:24 ze
Alastair Neil wrote:
> I've been reading about the work using flash SSD devices for ZIL and
> cache devices. I was wondering if anyone knows what releases of
> OpenSolaris and Solaris these features are available on? The
> performance improvements are pretty dramatic.
You can keep track of so
Thanks very much, that's exactly what I needed to hear :)
On Wed, Jul 30, 2008 at 12:47 PM, Richard Elling <[EMAIL PROTECTED]>wrote:
> Alastair Neil wrote:
>
>> I've been reading about the work using flash SSD devices for ZIL and cache
>> devices. I was wondering if anyone knows what releases of
I was able to reproduce this in b93, but might have a different
interpretation of the conditions. More below...
Ross Smith wrote:
> A little more information today. I had a feeling that ZFS would
> continue quite some time before giving an error, and today I've shown
> that you can carry on wo
Richard Elling wrote:
> I was able to reproduce this in b93, but might have a different
> interpretation of the conditions. More below...
>
> Ross Smith wrote:
>
>> A little more information today. I had a feeling that ZFS would
>> continue quite some time before giving an error, and today I'v
Peter Cudhea wrote:
> Your point is well taken that ZFS should not duplicate functionality
> that is already or should be available at the device driver level. In
> this case, I think it misses the point of what ZFS should be doing that
> it is not.
>
> ZFS does its own periodic commits to
Dear all.
I stumbled over an issue triggered by Samba while accessing ZFS snapshots.
As soon as a Windows client tries to open the .zfs/snapshot folder, it
issues the Microsoft equivalent of "ls dir", namely "dir *". It gets translated
by Samba all the way down into stat64("/pool/.zfs/snapshot/*"). The
G'Day,
On Wed, Jul 30, 2008 at 01:24:22PM -0400, Alastair Neil wrote:
>
>Thanks very much that's exactly what I needed to hear :)
>On Wed, Jul 30, 2008 at 12:47 PM, Richard Elling
><[EMAIL PROTECTED]> wrote:
>
>Alastair Neil wrote:
>
> I've been reading about the work using
Thomas Nau wrote:
> Dear all.
> I stumbled over an issue triggered by Samba while accessing ZFS snapshots.
> As soon as a Windows client tries to open the .zfs/snapshot folder, it
> issues the Microsoft equivalent of "ls dir", namely "dir *". It gets translated
> by Samba all the way down into stat64("
Thanks, this is helpful. I was definitely misunderstanding the part that
the ZIL plays in ZFS.
I found Richard Elling's discussion of the FMA response to the failure
very informative. I see how the device driver, the fault analysis
layer and the ZFS layer are all working together. Though the
All 3 boxes I had disk failures on are SunFire x4200 M2 running
Solaris 10 11/06 s10x_u3wos_10 X86 w the zfs it comes with, ie v3
Justin Vassallo wrote:
> All 3 boxes I had disk failures on are SunFire x4200 M2 running
>
> Solaris 10 11/06 s10x_u3wos_10 X86 w the zfs it comes with, ie v3
>
That version of ZFS is nearly 3 years old... has it been patched at all?
Even if it has been patched, its fault handling capabilities
Hi Richard,
The version on stable Solaris is v4 at best today. I definitely do not want
to go away from stable Solaris for my production environment, not least
because I want to continue my Solaris support contracts.
I will be attaching a Sun 2540FC array to these servers in the coming weeks
and
If I have two raidz's, a 5x400G and a later-added 5x1T, should I expect
that streaming writes would go primarily to only one of the raidz sets?
Or is this some side effect of my non-ideal hardware setup? I thought
that adding additional capacity to a pool would then automatically
balance writes to both
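One way to see how writes are actually being spread across the two raidz vdevs is per-vdev iostat, sampled (for example) every 5 seconds while the streaming write runs, assuming a pool named tank:

  zpool iostat -v tank 5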
Justin Vassallo wrote:
> Hi Richard,
>
> The version on stable Solaris is v4 at best today. I definitely do not want
> to go away from stable Solaris for my production environment, not least
> because I want to continue my Solaris support contracts.
>
Don't confuse the ZFS on-disk format versio
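To see which on-disk version a pool is at versus what the installed bits support, a quick sketch (pool name assumed to be tank):

  # versions supported by the running system, with a summary of each
  zpool upgrade -v

  # current version of each imported pool
  zpool upgrade
  zpool get version tank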
Peter Cudhea wrote:
> Thanks, this is helpful. I was definitely misunderstanding the part that
> the ZIL plays in ZFS.
>
> I found Richard Elling's discussion of the FMA response to the failure
> very informative. I see how the device driver, the fault analysis
> layer and the ZFS layer are all w
Thank you for the feedback
Justin
Thomas Nau wrote:
> Dear all.
> I stumbled over an issue triggered by Samba while accessing ZFS snapshots.
> As soon as a Windows client tries to open the .zfs/snapshot folder, it
> issues the Microsoft equivalent of "ls dir", namely "dir *". It gets translated
> by Samba all the way down into stat64("
From a reporting perspective, yes, zpool status should not hang, and
should report an error if a drive goes away, or is in any way behaving
badly. No arguments there. From the data integrity perspective, the
only event zfs needs to know about is when a bad drive is replaced, such
that a re
Hello Richard,
Thursday, July 3, 2008, 8:06:56 PM, you wrote:
RE> Albert Chin wrote:
>> On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote:
>>
>>> You are right that the J series do not have NVRAM onboard. However most JBODs,
>>> like HP's MSA series, have some NVRAM.
>>> The idea behi
Hello Rob,
Sunday, July 20, 2008, 12:11:56 PM, you wrote:
>> Robert Milkowski wrote:
>> During Christmas I managed to add my own compression to zfs - it was quite
>> easy.
RC> Great to see innovation but unless your personal compression
RC> method is somehow better (very fast with excellent
R
If you're really crazy for miniaturization check out this:
http://www.elma.it/ElmaFrame.htm
It's a 4-bay hot-swappable case for 2.5" drives that fits in one 5.25" slot!
You'll get low power consumption (= low heating) and it will be easier to find a
mini-ITX case that fits just this and the mobo! ;-)
Th
I'm testing out ZFS and AVS on two 64-bit snv86 systems that are
running as guests under VMWare Fusion.
I made up a zfs-pool on the primary on disks configured for AVS:
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfs-pool  15.6G  1.03G  14.6G   6%  ONLINE  -
And AVS seems to be work
Yeah, but 2.5" drives aren't that big yet. What, they max out at ~320 GB, right?
I want 1 TB+ disks :)
I rebooted both systems and now it's working!
Steve wrote:
> If you're really crazy for miniaturization check out this:
> http://www.elma.it/ElmaFrame.htm
>
> It's a 4-bay hot-swappable case for 2.5" drives that fits in one 5.25" slot!
>
>
Maybe only true for notebook 2.5" drives. Although I haven't checked, I
don't think that 2.5" SAS disks with
Vanja gmail.com> writes:
>
> And finally, if this is the case, is it possible to make an array with
> 3 drives, and then add the mirror later?
I assume you are asking if it is possible to create a temporary 3-way raidz,
then transfer your data to it, then convert it to a 4-way raidz? No, it is