Thanks for the comprehensive replies!
I'll need some baby speak on this one though:
> The recommended use of whole disks is for drives with volatile write caches
> where ZFS will enable the cache if it owns the whole disk. There may be an
> RFE lurking here, but it might be tricky to correctly
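A rough sketch of the whole-disk vs. slice distinction, with placeholder
device names (the format(1M) menu path below is from memory):

  # whole disk: ZFS puts an EFI label on it and can turn on the drive's
  # volatile write cache itself
  zpool create tank c1t0d0

  # slice: ZFS leaves the write cache alone, since other consumers may
  # share the disk
  zpool create tank c1t0d0s0

  # inspect or toggle the cache by hand in format's expert mode:
  format -e    # pick the disk, then cache -> write_cache -> display/enable/disable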
MC wrote:
> The situation: a three 500gb disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500gb disk is slightly smaller
> than the smallest disk in the array.
This is quite a problem for RAID arrays, too. It is why vendors use custom
labels for disks. Whe
[EMAIL PROTECTED] said:
> The situation: a three 500gb disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500gb disk is slightly smaller than
> the smallest disk in the array.
> . . .
> So I figure the only way to build smaller-than-max-disk-size functionality
>
RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> During testing I had huge numbers of unrecoverable checksum errors, which I
> resolved by disabling write caching on the disks.
Were the errors logged during writes, or during reads?
Can you share
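If it helps, the per-device counters and the FMA ereports are probably the
quickest way to tell reads from writes apart (the pool name is an example):

  zpool status -v tank    # READ / WRITE / CKSUM error counters per device
  fmdump -eV              # ereport.fs.zfs.* records; if I recall correctly,
                          # the payload shows which I/O type was involved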
We mostly rely on AMANDA, but for a simple, compressed, encrypted,
tape-spanning alternative backup (intended for disaster recovery) we use:
tar cf - | lzf (quick compression utility) | ssl (to encrypt) | mbuffer
(which writes to tape and looks after tape changes)
Recovery is exactly the oppos
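In case it is useful to anyone, a sketch of what such a pipeline can look
like; the lzf, openssl and mbuffer flags below are assumptions rather than
our exact invocation, so adjust block sizes and key handling to taste:

  # backup
  tar cf - /export/home | lzf | \
      openssl enc -aes-256-cbc -pass file:/etc/backup.key | \
      mbuffer -s 256k -m 1G -o /dev/rmt/0n

  # recovery: the same pipeline reversed
  mbuffer -s 256k -m 1G -i /dev/rmt/0n | \
      openssl enc -d -aes-256-cbc -pass file:/etc/backup.key | \
      lzf -d | tar xf -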
If you have disks to experiment on & corrupt (and you will!) try this:
System A formats the SAN [b]disk[/b] w/ UFS and mounts it
System A umounts [b]disk[/b]
System B mounts [b]disk[/b]
B runs [i]touch x[/i] on [b]disk[/b].
System A mounts [b]disk[/b]
System A and B umount [b]disk[/b]
System B [i]fsck
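Spelled out as commands, assuming a shared LUN visible to both hosts (the
device name is only a placeholder):

  A# newfs /dev/rdsk/c2t0d0s0
  A# mount -F ufs /dev/dsk/c2t0d0s0 /mnt
  A# umount /mnt
  B# mount -F ufs /dev/dsk/c2t0d0s0 /mnt
  B# touch /mnt/x
  A# mount -F ufs /dev/dsk/c2t0d0s0 /mnt   # A's in-memory metadata now diverges from B's
  A# umount /mnt
  B# umount /mnt
  B# fsck -F ufs /dev/rdsk/c2t0d0s0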
The following seems much more complicated, much less
supported, and much more prone to failure than just setting up Sun
Cluster on the nodes and using it just for HA storage and the Global
File System. You do not have to put the Oracle RAC instances under Sun
Cluster control.
On 8/25/07, M
The ZFS version pages (
http://www.google.ca/search?hl=en&safe=off&rlz=1B3GGGL_enCA220CA220&q=+site:www.opensolaris.org+zfs+version
) are undocumented on the main page, as far as I can see.
The root /versions/ directory should be listed on the main ZFS page somewhere,
and contain a list of al
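Until that happens, zpool(1M) itself gives a per-version summary (exact
output varies by build):

  zpool upgrade -v    # lists the supported on-disk versions and what each one added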
On Tue, 28 Aug 2007, David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
[ ... ]
>>> I don't see why multiple UFS mounts wouldn't work, if only one
>>> of them has write access. Can you elaborate?
>>
>> Even with a single writer you would need to be
>> concerned with re
On Tue, 28 Aug 2007, Charles DeBardeleben wrote:
> Are you sure that UFS writes a-time on read-only filesystems? I do not think
> that it is supposed to. If it does, I think that this is a bug. I have
> mounted read-only media before, and not gotten any write errors.
>
> -Charles
I think what m
>> It's worse than this. Consider the read-only clients. When you
>> access a filesystem object (file, directory, etc.), UFS will write
>> metadata to update atime. I believe that there is a noatime option to
>> mount, but I am unsure as to whether this is sufficient.
>
>Is this some particular
Are you sure that UFS writes a-time on read-only filesystems? I do not think
that it is supposed to. If it does, I think that this is a bug. I have
mounted read-only media before, and not gotten any write errors.
-Charles
David Olsen wrote:
>> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote
> It's worse than this. Consider the read-only clients. When you
> access a filesystem object (file, directory, etc.), UFS will write
> metadata to update atime. I believe that there is a noatime option to
> mount, but I am unsure as to whether this is sufficient.
Is this some particular build
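For reference, the mount_ufs options in question would look something like
this (the device path is an example):

  mount -F ufs -o ro /dev/dsk/c0t0d0s6 /mnt          # read-only
  mount -F ufs -o ro,noatime /dev/dsk/c0t0d0s6 /mnt  # also suppress atime updates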
The situation: a three 500gb disk raidz array. One disk breaks and you replace
it with a new one. But the new 500gb disk is slightly smaller than the
smallest disk in the array.
I presume the disk would not be accepted into the array because the zpool
replace entry on the zpool man page say
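One workaround that gets suggested, sketched here with placeholder device
names, is to build the raidz on slices sized a little below full capacity,
so a marginally smaller replacement still fits:

  # label each disk with a slice one or two percent short of full size
  # (format or fmthard), then:
  zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0

  # any replacement slice at least that large is then acceptable
  zpool replace tank c1t2d0s0 c2t0d0s0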
It sounds like you are looking for a shared file system like Sun's QFS?
Take a look here
http://opensolaris.org/os/project/samqfs/What_are_QFS_and_SAM/
Writes from multiple hosts basically go through the metadata server,
which handles locking and update problems. I believe there are other
op
> On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
> > Sorry, this is a bit off-topic, but anyway:
> >
> > Ronald Kuehn writes:
> >> No. You can neither access ZFS nor UFS in that way. Only one
> >> host can mount the file system at the same time (read/write or
> >> read-only doesn't matte
On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> Hi,
>
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?
No, it doesn't. This would be a really nice feature to have, but
currently when ZFS tries to write to a bad sector it simply retries a
few times and gives up. With C
ZFS Experts,
I am looking for some answers:
1. How do the dmu_tx_hold_*() routines calculate the amount of space
required for a modification? As the actual object modifications are done
after the calls to the dmu_tx_hold_*() routines, how does one know the
amount of space required?
2. Do the dmu_tx_hold_*() routines act indepe
What hardware you have and what size the data chunks are will determine
what impact IPsec has. WAN vs. LAN isn't the issue.
As for mitigating the impact of the crypto in IPsec, it depends on the
data size. If the size of the packets is > 512 bytes then the crypto
framework wi
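Two quick ways to see what the crypto framework has to work with on a given
box (output varies by platform and release):

  cryptoadm list    # kernel and userland providers, including hardware crypto
  ipsecalgs         # encryption/authentication algorithms registered for IPsec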