Question to ZFS users who have deployed ZFS in a Tier 1 application environment: what
uptime/availability are you seeing with ZFS? We are looking to deploy ZFS for
Tier 1 data accessible over NFS.
Thanks.
NV
Michael Stalnaker wrote:
> I have a 24 disk SATA array running on Open Solaris Nevada, b78. We had
> a drive fail, and I’ve replaced the device but can’t get the system to
> recognize that I replaced the drive.
>
> zpool status -v shows the failed drive:
>
> [EMAIL PROTECTED] ~]$ zpool status -v
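A typical way to tell ZFS about a swapped-in disk is zpool replace; a rough sketch, where the pool name LogData comes from the status output quoted later in this thread and the device name c2t3d0 is purely illustrative (on some SATA controllers the new disk first has to be configured with cfgadm):

# cfgadm -c configure sata1/3        (attachment point is illustrative; only needed if the controller requires it)
# zpool replace LogData c2t3d0
# zpool status -v LogData            (watch the resilver complete)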
> I thought RAIDZ would correct data errors automatically with the parity data.
Right. However, if the data is corrupted while in memory (e.g. on a PC
with non-parity memory), there's nothing ZFS can do to detect that.
I mean, not even theoretically. The best we could do would be to
narrow the window.
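For corruption that happens on disk, though, RAIDZ does heal bad blocks from parity whenever they are read or scrubbed; a rough illustration with a made-up pool name:

# zpool scrub tank
# zpool status -v tank      (the CKSUM counters and the scrub summary show whether anything needed repair)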
If you can't file an RFE yourself (with the attached diffs), then
yeah, I'd like to see them so I can do it.
cool stuff,
eric
On Feb 26, 2008, at 4:35 AM, [EMAIL PROTECTED] wrote:
> Hi All,
> I have modified zdb to do decompression in zdb_read_block. Syntax is:
>
> # zdb -R poolname:devid:blkn
Bob Friesenhahn wrote:
> The Sun Update Manager on my x86 Solaris 10 box describes this new
> patch as "SunOS 5.10_x86 nfs fs patch" (note use of "nfs") but looking
> at the problem descriptions this is quite clearly a big ZFS patch that
> Solaris 10 users should pay attention to since it fixes a bunch of nasty bugs.
The Sun Update Manager on my x86 Solaris 10 box describes this new
patch as "SunOS 5.10_x86 nfs fs patch" (note use of "nfs") but looking
at the problem descriptions this is quite clearly a big ZFS patch that
Solaris 10 users should pay attention to since it fixes a bunch of
nasty bugs.
Maybe
Fixed - what was needed is an export, followed by an import -f
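In other words, something along the lines of:

# zpool export external
# zpool import -f external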
From: Justin Vassallo [mailto:[EMAIL PROTECTED]]
Sent: 29 February 2008 15:13
To: zfs-discuss@opensolaris.org
Subject: zfs pool unavailable!
Hello,
I have a zfs pool on 3 external disks, connected via usb. All 3 disks are
fine and can be seen from rmformat.
David Jackson wrote:
>> I'm looking for an authoritative list of the patches that should be
>> applied for ZFS for the commercial version of Solaris. A
>> centralized URL that is maintained would be ideal. Can someone
>> reply back to me with one as I'm not a subscriber to the news list.
>>
On Fri, 29 Feb 2008, Justin Vassallo wrote:
> # zpool status
> pool: external
> state: FAULTED
> status: One or more devices could not be opened. There are insufficient
>         replicas for the pool to continue functioning.
> action: Attach the missing device and online it using 'zpool online'.
>
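If the devices were merely offlined rather than missing, the suggested action would amount to something like the following (the device name here is hypothetical):

# zpool online external c4t0d0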
> I'm looking for an authoritative list of the patches that should be
> applied for ZFS for the commercial version of Solaris. A
> centralized URL that is maintained would be ideal. Can someone
> reply back to me with one as I'm not a subscriber to the news list.
>
>
> David Jackson
> [EMAIL PROTECTED]
Hello,
I have a zfs pool on 3 external disks, connected via usb. All 3 disks are
fine and can be seen from rmformat. They all appear on the same nodes as
they were before the restart (this problem started following a reboot).
However, the zfs system is not recognizing them.
Any clues?
Thanks
Hi,
great, thank you. So ZFS isn't picky about finding the target fs already
created, with its properties already set, when replicating data into it.
This is very cool!
Best regards,
Constantin
Darren J Moffat wrote:
> Constantin Gonzalez wrote:
>> Hi Darren,
>>
>> thank you for the clarification, I didn't know that.
Constantin Gonzalez wrote:
> Hi Darren,
>
> thank you for the clarification, I didn't know that.
>
>> See the man page for zfs(1) where the -R option for send is discussed.
> Back to Brad's RFE, what would one need to do to send a stream from a
> compressed filesystem to one with a different compression setting?
Hi Darren,
thank you for the clarification, I didn't know that.
> See the man page for zfs(1) where the -R option for send is discussed.
Oh, this is new. Thank you for bringing us -R.
Back to Brad's RFE, what would one need to do to send a stream from a
compressed filesystem to one with a different compression setting?
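A minimal sketch of the recursive form being discussed, with made-up dataset names; -R sends the snapshots, properties and descendent filesystems below the named snapshot:

# zfs snapshot -r tank/home@migrate
# zfs send -R tank/home@migrate | zfs receive -d backup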
Constantin Gonzalez wrote:
> Hi Brad,
>
> this is indeed a good idea.
>
> But I assume that it will be difficult to do, due to the low-level nature
> of zfs send/receive.
>
> In your compression example, you're asking for zfs send/receive to
> decompress the blocks on the fly. But send/receive operates on a lower level.
Hi Brad,
this is indeed a good idea.
But I assume that it will be difficult to do, due to the low-level nature
of zfs send/receive.
In your compression example, you're asking for zfs send/receive to
decompress the blocks on the fly. But send/receive operates on a lower
level: It doesn't care much about properties like compression.
I love the send and receive feature of zfs. However, the one feature
that it lacks is that I can't specify on the receive end how I want
the destination zfs filesystem to be created before receiving the
data being sent.
For example, let's say that I would like to do a compression study to
determine which compression setting best suits my data.
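A hedged sketch of the closest workaround today: since a plain send stream carries no properties, a received filesystem inherits compression from its new parent, so the comparison can be set up by pre-creating parents with the settings of interest (all names below are made up):

# zfs snapshot tank/data@study
# zfs create -o compression=lzjb tank/study-lzjb
# zfs create -o compression=gzip tank/study-gzip
# zfs send tank/data@study | zfs receive tank/study-lzjb/data
# zfs send tank/data@study | zfs receive tank/study-gzip/data
# zfs get -r compressratio,used tank/study-lzjb tank/study-gzip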
I have a 24 disk SATA array running on Open Solaris Nevada, b78. We had a
drive fail, and I've replaced the device but can't get the system to
recognize that I replaced the drive.
zpool status -v shows the failed drive:
[EMAIL PROTECTED] ~]$ zpool status -v
pool: LogData
state: DEGRADED
status